In this Viewpoint article, published in JAMA on Feb 13, 2017, John P.A. Ioannidis summarizes critical factors (e.g. poor statistical methods, suboptimal study design, and small sample sizes) that contribute to the observed high rate of non-reproducible research studies. This assessment is underlined by recently published replication efforts on 5 cancer biology studies (Nosek BA, et al. eLife. 2017 Jan 19;6). These reproducibility attempts demonstrated that unanticipated outcomes (e.g. the unforeseen spontaneous regression of tumors) further complicated the experiments and that experimental outcomes diverged even with minor modifications of the experimental conditions.

However, the author also points out that it is impossible to be certain whether the original experiment, the replication attempt, both, or neither is correct.

Although biological processes are very complex and multifactorial, scientists usually do not report the essential factors of laboratory experiments in enough detail to allow proper repetition of the experiment. This raises an important question: would a process that is so sensitive to background conditions be of interest for translational purposes, e.g. developing a treatment or diagnostic test for widespread clinical use? If a research finding changes abruptly, for example within one minute of experimental manipulation in animals or cell cultures, is that approach going to work reliably when applied to the even more complex biology of individual humans? Here, non-reproducibility may still serve an important function, as it can offer insights into why results do not replicate and which parameters shape responses in experimental systems.

In contrast to clinical trials, preclinical and basic research is subject to only very limited external oversight and regulation. The misaligned incentive system ('publish or perish'), the use of poor research methods (see above), and the lack of transparency may explain the complex problem of non-reproducibility in bench research. The author therefore suggests that rewards and incentives should focus on reproducible results, open science, transparency, rigorous experimental methods, and efficient safeguards. For example, funding, hiring, and promotion decisions could consider whether a scientist has a record of sharing data, protocols, and software, and of maintaining high-quality experimental standards. Preclinical research methods could be improved in a manner similar to the changes that have occurred in clinical investigation, particularly in the conduct and reporting of randomized clinical trials.

Basic and preclinical biomedical research is extremely important for improving human health; however, the research community should reassess whether it can afford the luxury of continuing to fund so much research that is not reproducible.