The age of hyperauthoring
2015 was a record-breaking year in scientific literature, a domain usually preoccupied with discoveries rather than records. That year, over 5,000 scientists were credited as authors of a paper on the mass of the Higgs boson in Physical Review Letters. This so-called hyperauthored paper was 33 pages long, yet the author list was twice the length of the article itself. Later, in 2018, Nature Index reported that there were over 100 research articles co-authored by more than 1,000 scientists. The emergence of hyperauthored papers reflects the fact that science today is a global enterprise that requires large-scale collaboration. The days of the lone scientist toiling over a research problem are long gone.
Researchers, in a way, collaborate not only across vast distances but also through time. Scientific progress would not be possible if not for the cumulative effort of generations of scientists. Each new generation “stands on the shoulders of giants” and relies on published work to learn from those who came years, decades or centuries before.
Yet learning from the past is not a passive process. The inquisitive attitude that drives science forward also has to be applied to studying published literature. Scientists implicitly agree that no result, no matter how well established, is beyond scrutiny. This agreement rests on the principle that the assumptions of a study are clearly stated and that there is enough detail for independent replication. However, as the scientific enterprise expands, it has become more challenging to honour this commitment. Details crucial for replication are often left behind in the fast pace of scientific advancement.
Achieving the transparency needed to preserve the ‘self-correcting’ nature of science is frequently tedious. It requires repetitive testing and the reporting of minute details. Furthermore, incentives don’t favour this kind of work: replication studies are hard to publish and rarely help scientists advance their careers. With scientists spending most of their time on labour-intensive experiments and only a few hours on genuine scientific work, it is only natural that they would rather move forward than revisit what appears to be already established. This predicament has serious implications for biomedical research, where discovery relies on a variety of elaborate, manual and error-prone methods. Even in an era when other sectors are being transformed by robotics and automation, thousands of scientists remain bogged down by repetitive pipetting and lengthy handwritten records.
The reproducibility crisis
Over time, this has placed science on unsteady foundations, leading to what has been called the reproducibility crisis. Scientists in academia and industry are well aware of this problem. Nature surveyed 1,576 biomedical researchers, and more than 70% of respondents reported having failed to reproduce published results. In turn, Bayer’s target discovery team looked back at 67 target-validation projects in oncology, women’s health and cardiovascular medicine. The results were dismal: only 14 projects matched the published findings. “Among researchers in the biomedical industry, it has been known for a long time that a lot of peer-reviewed findings cannot be reproduced. We quantified this issue, and decided to share our findings with the community. In my opinion, the lack of reproducibility has reached crisis point, and urgently needs to be addressed,” explains Professor Khusru Asadullah, lead author of the Bayer study and a member of the Arctoris Advisory Board.
The effects of this challenge go far beyond the ivory tower. Basic biomedical research is the cornerstone of drug discovery, where progress has stalled in recent years. Lack of reproducibility undermines trust in science and the efforts of scientists and funders alike. In the United States alone, a recent study estimated that approximately US$28 billion is spent each year on preclinical research that is not reproducible. The crisis also makes investors wary: as a rule of thumb, VC funds and pharmaceutical companies assume that 50 to 65% of published research will not be replicable.
The reproducibility crisis is a complex and multifaceted problem, rooted as much in technical limitations as in sociological factors. However, it is clear that increasing transparency and reducing experimental idiosyncrasies will be central to a successful resolution. Regulators can nudge the community in this direction. A good example is the pre-registration requirement imposed by the National Institutes of Health, which compels investigators to enter the details of their study into clinicaltrials.gov before collecting any data. The UK Reproducibility Network, a grassroots initiative of researchers, is another proponent of preregistering research projects, and early results already show a marked increase in the diversity of findings reported. Publishers and scientists can also collaborate to foster transparency by encouraging best practices and creating outlets for traditionally underreported research, such as non-significant or negative results. Notable examples are the Journal of Pharmaceutical Negative Results and the F1000 blog.
Boosting researchers’ capabilities with automation
However, these measures must not become an extra burden on already overstretched scientists. Instead, we must provide researchers with adequate tools to produce high-quality, reproducible results at the pace required by modern society. Most lab scientists spend the majority of their time on manual lab work rather than on study design and interpretation. The impressive advances in robotics and artificial intelligence made in recent years offer an alternative: let scientists focus on the big picture and leave the execution to robots. Robots excel at repetitive tasks, speed and record-keeping, while human intelligence is unparalleled at asking profound scientific questions and interpreting results. Thus, automating laboratory workflows can augment scientists’ capabilities to produce ground-breaking and reproducible research.
At Arctoris, we can boost R&D capabilities and open previously inaccessible research avenues. We have created a fully automated platform for drug discovery and life sciences research, which significantly accelerates the scientific process. Our robots adhere to validated protocols and collect fully reproducible data, free from human error and variability. Further, the system collects data throughout the entire course of an experiment rather than only at its endpoints. This generates detailed audit trails that provide an extra layer of information crucial for reproducibility.
Our platform offers remote access for scientists, enabling them to conduct their research from anywhere in the world, at any time of day. Our goal is to democratise access to research capabilities. Even during the COVID-19 pandemic, the Arctoris lab has reliably conducted experiments 24/7. With Arctoris, researchers have more time to focus on the bigger picture, formulate conclusions or identify new directions for scientific enquiry — in other words, to do genuine scientific work that requires the knowledge and creativity only humans have.