Die Reproduzierbarkeitskrise: Bedrohung oder Chance für die Wissenschaft? ("The Reproducibility Crisis: Threat or Opportunity for Science?"; article in German)

In this Editorial, published in Biologie in unserer Zeit, Martin C. Michel and Ralf Dahm discuss threats and opportunities related to the current reproducibility crisis in biomedical sciences.
The authors highlight several top-down approaches currently in place to increase data quality and reproducibility: the BMBF, the EU, and the NIH have launched research programs on the topic of reproducibility; various specialist journals (e.g. Nature and Molecular Pharmacology) have adapted their guidelines for authors; and the DFG has published new guidelines for Good Scientific Practice and declared them binding for all DFG-funded scientists.
In addition, there is a growing number of bottom-up initiatives, such as the European Quality in Preclinical Data (EQIPD) project (https://quality-preclinical-data.eu/) and the Global Preclinical Data Forum (https://www.preclinicaldataforum.org). These initiatives, as well as professional organizations like the PAASP Network (www.paasp.net), offer solutions, advice and training to promote preclinical data quality.


eNeuro publishes Series on Scientific Rigor

Christophe Bernard, editor-in-chief of eNeuro, addresses in this editorial the issue of scientific rigor, with reference to two Commentaries by Katherine Button and Oswald Steward. To show that concerns about rigor are not a novel phenomenon, he gives the wonderful example of a dispute between Louis Pasteur and Claude Bernard. Remarkably, clear guidelines and proper training still do not receive the attention they deserve. To overcome this problem, scientists should, on the one hand, be more critical of their own observations and state clearly when their data are exploratory. On the other hand, scientists should receive better training in scientific rigor. eNeuro is establishing a webinar series to tackle the latter issue. Link

Commentary by Katherine Button: Statistical Rigor and the Perils of Chance

Button discusses the role of chance in statistical inference and how poor study design leads to a high rate of false-positive findings. Furthermore, she argues that the current publication and funding system perpetuates this problem by rewarding only the publication of positive results. Link

Commentary by Oswald Steward: A Rhumba of “R’s”: Replication, Reproducibility, Rigor, Robustness: What Does a Failure to Replicate Mean?

The commentary by Steward points out many problematic practices that are common in daily lab routine and in the resulting publications. He refers to "Begley's 6 red flags" and provides a complementary list of points, which he suggests could be called "the 6 gold stars of rigor". He proposes implementing these gold stars as standard publishing practices; they include, for example, reporting statistical power, the timing of data collection, and all analyses performed in this context. Link

An incentive-based approach for improving data reproducibility

In this editorial published in Science Translational Medicine, Michael Rosenblatt, Merck's executive vice president and chief medical officer, argues that irreproducible results from academic labs cause pharmaceutical companies to waste millions and that the problem "threatens the entire biomedical research enterprise." He suggests an incentive-based approach for improving data reproducibility that is essentially a "full or partial money-back guarantee": if research that drug companies pay for turns out to be wrong, universities would have to return the funding they received. Merck believes this would put the pressure right where it belongs, on the scientists. Read more.