Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity
This recently published survey identifies the most frequent problem in science: sloppy research. The survey was conducted among participants of four World Conferences on Research Integrity and comprised 60 questions (17% of the 1,500+ invited participants replied). Although fabrication of data ranked high in terms of its impact on truth, the frequency of this dishonorable behavior appeared to be quite low. In contrast, forms of sloppy science such as selective reporting, selective citing, and flaws in quality assurance and mentoring were ranked much higher in frequency and occurrence.
This is nicely mirrored in two publications dealing with the costs to society. Stern et al. (2014) put the cost of each paper containing fabricated or falsified data at US$ 400,000, cumulating, according to the authors, to US$ 58 million for the period between 1992 and 2012. The costs of irreproducible data in the US alone, however, were estimated at US$ 28 billion per year (Freedman et al. 2015). These figures suggest that the falsification stories hyped by the news media are only a minor problem. The real issue is the smoldering problem of sloppy science, which eats up vast resources and demonstrates that the need to target the 'lack of Good Research Practice' has never been more urgent.
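Taking the quoted figures at face value, a quick back-of-the-envelope comparison makes the scale difference explicit (extrapolating the annual estimate over the same 20-year window is our own simplifying assumption):

```python
# Back-of-the-envelope comparison of the figures quoted above; the
# 20-year extrapolation of the annual estimate is our own assumption.
misconduct_total = 58e6           # US$, fabricated/falsified papers, 1992-2012
irreproducible_per_year = 28e9    # US$ per year, irreproducible US research

years = 2012 - 1992               # the 20-year misconduct window
ratio = irreproducible_per_year * years / misconduct_total
print(f"Irreproducibility costs ~{ratio:,.0f}x more over the same period")
```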
Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions
In this study, published in PLOS Biology, A.D. Higginson and M.R. Munafò used a mathematical model to predict how an optimal researcher trying to maximise the impact of their published research articles should best spend their time and effort. Scientific researchers must decide what proportion of their time to invest in seeking exciting new results rather than confirming previous findings, and how many resources to invest in each experiment. The model shows that the best approach for career progression is to carry out many small exploratory studies rather than confirmatory ones. This behavior ultimately leads to fewer real effects being identified and more false positive results being published. As the authors state: 'The best thing for scientific progress would be a mixture of medium-sized exploratory studies with large confirmatory studies. Our work suggests that researchers would be more likely to do this if funding agencies and promotion committees rewarded asking important questions and good methodology, rather than surprising findings and exciting interpretations.'
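The statistical intuition can be illustrated with a toy simulation (this is our own sketch under assumed parameters, not the authors' actual model): given a fixed participant budget, many small studies generate more positive findings overall, but a far worse ratio of true to false positives than a few large studies.

```python
# Toy simulation (assumed parameters; not Higginson & Munafò's model):
# a fixed participant budget spent on many small vs. few large studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def career(n_per_group, budget=400, effect=0.3, prop_real=0.1,
           alpha=0.05, sims=2000):
    """Average true/false positives per simulated 'career'."""
    n_studies = budget // (2 * n_per_group)   # studies the budget allows
    tp = fp = 0
    for _ in range(sims * n_studies):
        real = rng.random() < prop_real       # is a true effect present?
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:   # "publishable" result
            tp += real
            fp += not real
    return tp / sims, fp / sims

for n in (20, 100):   # many small exploratory vs. few large studies
    tp, fp = career(n)
    print(f"n={n:3d}/group: {tp:.2f} true vs {fp:.2f} false positives")
```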
Irreproducibility of published bioscience research: Diagnosis, pathogenesis and therapy
In this commentary, published in Molecular Metabolism, Jeffrey S. Flier describes some of the current causes of research irreproducibility in the biomedical sciences. He highlights that a better understanding of the following factors is needed so that appropriate countermeasures can be designed: 1. poor experimental methodology; 2. poorly characterized reagents; 3. deficient oversight, training and mentorship; 4. complex collaborative arrangements; 5. inappropriate responses to incentives distinct from those directly related to the conduct of science; 6. ethical lapses and sociopathy; 7. incentives produced by the major funders of bioscience research; and 8. incentives produced by the publishing system.
The author then discusses potential responses to the problem of research irreproducibility, such as: a) enhanced training in experimental design, statistics, proper use of reagents, data management, and research and publication ethics; b) greater emphasis by faculty on the reproducibility and importance of published research, with reduced emphasis on the number of publications and the journals in which they appear; c) increased expectations for open data as the standard approach in all publications; d) changes in scientific publishing to facilitate research reproducibility; and e) changes to the culture of scientific research.
Finally, J.S. Flier points out that, although more research into the nature and causes of irreproducible bioscience research is needed, ‘we know enough today about the relevant facts to initiate remediating actions in many areas. Since so many institutions and cultural domains are involved, multiple approaches must be tried, with as much communication and, where possible, coordination among them.’
OSF Preprints, the open preprint repository network
The Open Science Framework (OSF) has just released its latest free scholarly service, OSF Preprints. Researchers from any discipline can upload a preprint to this general service to get quick feedback on research and data sets, or upload to one of the supported branded preprint services, such as engrXiv, SocArXiv, or PsyArXiv, which were built on the OSF to share preprints within specific research communities.
OSF Preprints also makes use of SHARE, which has indexed over a million examples of the latest research, and enables scientists to search across other preprint providers like arXiv, bioRxiv, and PeerJ.
IICARus – a randomised controlled trial of an Intervention to Improve Compliance with the ARRIVE guidelines
The IICARus project is a randomised controlled study assessing whether mandating the completion of an ARRIVE checklist improves full compliance with the ARRIVE guidelines. Manuscripts, limited to in vivo studies, will be scored against the operationalised ARRIVE checklist by two independent reviewers, each blinded both to intervention status and to the other reviewer's scores. Discrepancies will be resolved by a third reviewer, who will be blinded to the identity, but not to the scores, of the previous reviewers, as sketched below.
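The reconciliation step reads naturally as a small function; the following is purely illustrative (the names and data shapes are our assumptions, not the actual IICARus tooling):

```python
# Illustrative sketch of the dual-review protocol described above;
# names and structure are assumptions, not IICARus's actual code.
def reconcile_scores(review_a: dict, review_b: dict, third_reviewer) -> dict:
    """Merge two blinded ARRIVE checklist scores item by item.

    Agreements are accepted directly; discrepancies go to a third
    reviewer who sees both scores but not who gave them.
    """
    final = {}
    for item in review_a:
        if review_a[item] == review_b[item]:
            final[item] = review_a[item]     # independent reviewers agree
        else:
            final[item] = third_reviewer(item, review_a[item], review_b[item])
    return final
```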
The IICARus initiators are looking for reviewers to assess manuscripts. In addition to gaining review skills and earning prizes, reviewers will be listed as collaborators on the resulting publication. An online training module, accompanied by a resource, must be completed as a prerequisite for becoming a reviewer for this study.
Those interested in registering and starting the training for this study may sign up HERE.
A Laboratory Critical Incident and Error Reporting System for Experimental Biomedicine
Incident reporting has its origins in the aviation industry of the 1950s, where it has proved successful in reducing the number of incidents and improving pilot safety. Risk management activities and Critical Incident Reporting (CIR) were later introduced into clinical medicine, where practitioners are expected to report occurrences that resulted, or almost resulted (near misses), in patient injury, so that lessons can be learned from these incidents.
However, a functional CIR system (CIRS) had never been implemented in the context of academic basic research. In this article, U. Dirnagl and colleagues describe the development of LabCIRS, a free, open-source software tool written in Python that can be used to implement a simple CIR system in research groups, laboratories, or institutions. Importantly, LabCIRS is easy to set up, use, and administer, and does not require a large commitment of resources or time. As the authors point out, after its implementation the system has already 'led to the emergence of a mature error culture, and has made the laboratory a safer and more communicative environment'; it could therefore become a valuable tool for increasing the integrity of preclinical biomedical research.
A demo version is accessible at http://labcirs.charite.de (sign in as “reporter”).
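To make the idea concrete, a minimal incident record might capture fields like the following (an illustrative sketch only; the field names are our assumptions, not LabCIRS's actual data model):

```python
# Minimal sketch of a critical-incident record; field names are
# illustrative assumptions, not LabCIRS's actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentReport:
    """An anonymously reported lab incident or near miss."""
    incident_date: date
    description: str          # what happened, in the reporter's words
    consequences: str         # actual or potential outcome
    prevention: str           # how a recurrence could be avoided
    published: bool = False   # made visible lab-wide only after review
```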
Guidelines to improve animal study design and reproducibility for Alzheimer’s disease and related dementias: For funders and researchers
The poor reproducibility and translatability of preclinical research, especially animal studies, are a growing concern for all stakeholders, including academic and industry researchers, medical journal editors, and governmental and non-governmental funding organizations. In this perspective article, a workgroup of the International Alzheimer's disease Research Funder Consortium, a group of over 30 research funding agencies from around the world, compiled best practices and guidelines for Alzheimer's disease-related studies that are also highly relevant for most other areas of preclinical biomedical research.
Importantly, these recommendations provide support for preclinical study design and differentiate between exploratory studies (pilot or early proof-of-concept studies) and therapeutic (confirmatory) studies, the latter of which should be designed and executed with the same rigor as human clinical trials. A third category, 'mechanistic' experiments, is also included; these usually precede the identification of a compound or therapeutic agent used in exploratory and therapeutic studies.
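One concrete element of such confirmatory rigor is an a priori sample-size calculation. The sketch below uses statsmodels with placeholder values (the effect size, alpha, and power are assumptions chosen for illustration, not values from the guidelines):

```python
# A priori sample-size calculation for a confirmatory two-group study;
# effect size, alpha, and power below are placeholder assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # expected standardized effect (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired chance of detecting a true effect
)
print(f"Required animals per group: {n_per_group:.0f}")   # about 64
```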
The goal of this article is to provide guidelines to funding organizations, to scientists applying for funding, and to peer-review experts assessing applications related to preclinical research. Although challenges remain in implementing these guidelines globally, funding agencies, together with universities and journal editors, can provide incentives and rewards that encourage adherence to Good Research Practice standards and thereby improve reproducibility.