Irreproducibility in Preclinical Biomedical Research: Perceptions, Uncertainties, and Knowledge Gaps.

Over the last several years, many articles and commentaries have been written on the subject of "reproducibility of research data". In this new review, Jarvis and Williams provide a refreshing perspective, arguing that "perceptions of research irreproducibility in the preclinical sciences are based on limited, frequently anecdotal data from select therapeutic areas of cell and molecular biology". The review is certainly worth reading, and its central claim clearly indicates where the urgent need lies today: aside from a limited number of analyses of real data and some theoretical considerations, hard evidence is missing. In the absence of such evidence, there is a danger that emerging efforts in the field of "reproducibility" have a "potential to inhibit scientific innovation", as Jarvis and Williams note. (Trends Pharmacol Sci. 2016 DOI: http://dx.doi.org/10.1016/j.tips.2015.12.001)

Additional Reads February 2016

Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis. Christiaan Vinkers and colleagues at Utrecht University reported in the British Medical Journal the results of a lexicographic analysis of the use of positive and negative words in scientific abstracts. Over the last four decades, the absolute frequency of positive words (e.g. 'robust', 'novel', 'innovative' and 'unprecedented') increased from 2.0% (1974-80) to 17.5% (2014), a relative increase of 880% (BMJ 2015;351:h6467 doi: 10.1136/bmj.h6467).
Evaluating research: 'A multidisciplinary approach to assessing research practice and quality'. In this article, P. Mårtensson et al. developed a concept model of research, which was used as a basis for identifying and discussing different aspects of quality and evaluation in research. In their concept hierarchy, research quality is defined by four main areas labeled 'Credible', 'Contributory', 'Communicable' and 'Conforming'. The intention was to create a platform for evaluating research quality within and across specific disciplines and thereby to contribute to efforts to improve research quality and its understanding (Research Policy, Volume 45, Issue 3, April 2016, Pages 593–603).
Reproducible Research Practices and Transparency across the Biomedical Literature. A team of authors, led by John P.A. Ioannidis, analyzed a randomly chosen set of 441 PubMed-indexed biomedical papers published between 2000 and 2014. Of these 441 publications, 268 included empirical data, and of those 268, only one study provided a full protocol. None of the 268 papers provided means to access full datasets; only one mentioned making complete raw data available upon request. The majority of studies did not contain any conflict of interest statement (69.2%), and only half of all papers included funding information (51.7%). Interestingly, only 1.5% of papers with empirical data were replication studies. Altogether, these results demonstrate the continued need to improve reproducibility and transparency practices in the biomedical literature. (PLOS Biology, DOI:10.1371/journal.pbio.1002333 January 4, 2016).

Robust science needs robust corrections.

David Allison and his colleagues argue that mistakes in peer-reviewed papers are easy to find but hard to fix. In this Nature commentary, six key problems are identified and excellent suggestions are made on how to improve the current situation. Most interestingly, the authors argue that "scientists who engage in post-publication review often do so out of a sense of duty to their community", but these efforts need to be recognized and incentivised. (Nature. 2016 Feb 4;530(7588):27-9. doi: 10.1038/530027a.).
