A recent report by Tiwari et al. investigated the reproducibility rate in systems biology modelling by attempting to reproduce the mathematical representation of 455 kinetic models. The authors tried to
1.) reproduce the published model as described (step 1),
2.) if that failed, adjust the model based on their own modelling experience (step 2),
3.) if that failed too, contact the authors of the original study for clarification and support (step 3).
When attempting to reproduce the selected models based solely on the information provided in the primary literature (step 1), only 51% of the models could be reproduced, meaning that the remaining 49% required additional effort (i.e., steps 2 and 3). Moreover, 37% of all articles could not be reproduced by Tiwari and colleagues at all, even after adjusting the model or asking the authors of the original study for support.
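As a rough sanity check, the reported percentages can be turned back into approximate model counts. This is illustrative arithmetic only: the percentages come from the text, while the rounded absolute counts below are our own approximations, not figures reported by Tiwari et al.

```python
# Illustrative arithmetic based on the percentages quoted in the text.
# The rounded counts are approximations, not numbers from the original study.
total_models = 455

reproduced_step1 = round(total_models * 0.51)         # reproducible from the paper alone
needed_more_effort = total_models - reproduced_step1  # required steps 2 and/or 3
never_reproduced = round(total_models * 0.37)         # irreproducible even with author support

# Models "rescued" by adjusting the model or contacting the authors (roughly 12%):
rescued = needed_more_effort - never_reproduced

print(reproduced_step1, needed_more_effort, never_reproduced, rescued)
# → 232 223 168 55
```

In other words, of the roughly 223 models that could not be reproduced directly, only about a quarter were recovered through extra effort or author contact.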
Notably, over 70% of the corresponding authors did not respond when contacted by Tiwari et al., and in half of the cases where authors did respond, the model still could not be reproduced, even with their support.
This low reproducibility rate, combined with the very low response rate of the original authors, underscores the need for rigorous reporting standards in the original publication, verified by peer reviewers.
To improve the situation for systems biology, Tiwari and colleagues provided specific reporting guidelines in the form of an eight-point checklist to increase the reproducibility of systems biology modelling.
For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. However, the assessment of researchers still rarely includes considerations related to trustworthiness, rigour, and transparency. Thus, as part of the 6th World Conference on Research Integrity, the authors developed the Hong Kong Principles (HKPs) with a specific focus on driving research improvement by ensuring that researchers are explicitly recognized and rewarded for behaviours that strengthen research integrity. The article presents five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle, a rationale for its inclusion is provided, along with examples where the principle is already being adopted.
Related to the Hong Kong Principles, a survey published in 2016 asked 1,353 attendees of four past World Conferences on Research Integrity to score 60 research misbehaviours according to their perceived frequency of occurrence, preventability, impact on truth (validity), and impact on trust between scientists. Importantly, the resulting scores suggest that selective reporting, selective citing, and flaws in quality assurance and mentoring are viewed as the major problems of modern research. Respondents were much more concerned about sloppy science than about scientific fraud. Adequate supervision and mentorship, proper handling and storage of data, adequate record keeping, and adherence to principles of quality assurance were identified as potential solutions (summarized in Table 3 below):
Staff at QED Biomedical Ltd have conducted research to demonstrate how sample bias in patient-derived tumour tissue samples with somatic mutations may affect current clinical research and precision medicine.
The research identified the following limitations:
1.) The review found no guidelines to aid systematic reviewers, clinical researchers, or decision makers in considering, critiquing, and understanding the impact of sample bias produced by tumour heterogeneity.
2.) General recommendations for dealing with tumour heterogeneity included the use of multiple tumour samples per patient, selection of sample purity thresholds, specific sequencing techniques, liquid biopsy, and microdissection.
3.) In molecular biomarker research, pathology methods, pathology results, and tumour purity are underreported, preventing evaluation of sample bias.
4.) In 58% of datasets it was unclear whether the tissues under investigation were diagnosed directly or assumed from the patient’s pathology.
5.) Authors reporting on tissue samples derived from databases did not report any baseline pathology results or pathology methods.
There is a lag in knowledge between clinical decision makers and biomedical scientists with regards to how tumour heterogeneity can impact sample bias.
The reporting standards for tumour tissue samples need to be improved.
Sample bias due to tumour heterogeneity hinders the identification of all somatic gene mutations in a tissue sample. This produces false negative results in tumours analysed for genetic or molecular biomarkers. Ultimately, the underreporting of genetic mutations prevents beneficial precision medical treatments reaching all possible patients.
Several initiatives have set out to increase the transparency and internal validity of preclinical studies. While many of the points raised in these various guidelines are identical or similar, they differ in detail and rigour. Most of them focus on reporting; only a few cover the planning and conduct of studies. The aim of this systematic review was to identify existing guidelines on experimental design, conduct, analysis, and reporting relating to preclinical animal research. Based on a systematic search in PubMed, Embase, and Web of Science, 58 unique recommendations were extracted. Among the most frequently recommended items were sample size calculations, adequate statistical methods, concealed and randomised allocation of animals to treatment, blinded outcome assessment, and recording of animal flow through the experiment. The authors highlight that, although these recommendations are valuable, there is a striking lack of experimental evidence on their importance and relative effect on experiments and effect sizes. This work is part of the European Quality In Preclinical Data (EQIPD) consortium.
The reproducibility crisis triggered worldwide initiatives to improve rigor, reproducibility, and transparency in biomedical research. There are many examples of scientists, journals, and funding agencies adopting responsible research practices. The QUEST (Quality-Ethics-Open Science-Translation) Center offers a unique opportunity to examine the role of institutions. The Berlin Institute of Health founded QUEST to increase the likelihood that research conducted at this large academic medical center would be trustworthy, useful for scientists and society, and ethical. QUEST researchers perform “science of science” studies to understand problems with standard practices and develop targeted solutions. The staff work with institutional leadership and local scientists to incentivize and support responsible practices in research, funding, and hiring. Some activities described in this paper focus on the institution, whereas others may benefit the national and international scientific community. The experiences, approaches, and recommendations of the QUEST Center will be informative for faculty leadership, administrators, and researchers interested in improving scientific practice.
To improve the robustness and transparency of scientific reporting, the American Society for Pharmacology and Experimental Therapeutics (ASPET), with input from PAASP’s Martin Michel, T.J. Murphy and Harvey Motulsky, has updated the Instructions to Authors (ItA) for ASPET’s primary research journals: Drug Metabolism and Disposition, Journal of Pharmacology and Experimental Therapeutics, and Molecular Pharmacology. The revised ItA went into effect on January 1st, 2020. Details and the underlying rationale are described in an editorial/tutorial that appeared in all three journals. Key recommendations include the need to differentiate between pre-planned, hypothesis-testing experiments on the one hand and exploratory experiments on the other; explanations of whether key elements of study design, such as sample size and choice of specific statistical tests, had been specified before any data were obtained or were adapted thereafter; and explanations of whether any outliers (data points or entire experiments) were eliminated and when the rules for doing so had been defined.
Importantly, Molecular Pharmacology has established a dedicated review process for each manuscript received to check compliance with the new guidelines. This is in contrast to JPET and DMD, which do not have similar policies in place (yet).
It will be interesting to analyze the impact of Mol Pharmacol’s additional review of manuscripts for guideline compliance after a certain period of time. Indeed, Anita Bandrowski and colleagues have recently shown that the identifiability of research tools such as antibodies has improved dramatically since 2015/2016 in journals like eLife and Cell compared to, e.g., PLOS ONE. The reason identified was that both journals (eLife and Cell) not only changed their guidelines to make them more visible but also proactively enforced them. PLOS ONE also changed its ItA to improve how research tools are described, but without the same level of active enforcement.
We do hope that ASPET’s new Instructions to Authors will have a positive impact and will set an example for other journals and learned societies to follow. We also hope that ASPET’s efforts will further demonstrate the importance of actively enforcing guidelines and instructions.