Academic criteria for promotion and tenure in biomedical sciences faculties: cross-sectional analysis of international sample of universities

Understanding the variability of criteria and thresholds for promotion and tenure applied across institutions requires a systematic empirical assessment. Therefore, the authors aimed to identify and document a set of pre-specified traditional (for example, number of publications) and non-traditional (for example, data sharing) criteria used to assess scientists for promotion and tenure within faculties of biomedical sciences among a large number of universities around the world. The study shows that the evaluation of scientists emphasises traditional criteria as opposed to non-traditional criteria. This may reinforce research practices that are known to be problematic while insufficiently supporting the conduct of better quality research and open science. The authors conclude that institutions should consider incentivising non-traditional criteria.

LINK

Reproducibility of animal research in light of biological variation

Context-dependent biological variation presents a unique challenge to the reproducibility of results in experimental animal research, because organisms’ responses to experimental treatments can vary with both genotype and environmental conditions. In contrast to the current gold standard of rigorous standardization, the authors recommend systematic heterogenization: actively incorporating biological variation into the study design by diversifying study samples and conditions. The article lays out the scientific rationale for this approach so that researchers, regulators, funders and editors can understand this paradigm shift, and presents a road map towards better practices for improving the reproducibility of animal research.
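To make the idea concrete, here is a minimal sketch (not taken from the paper) of what systematic heterogenization can look like in practice: instead of testing all animals under one standardized condition, animals are allocated in a balanced, randomised way across several environmental conditions, so that condition effects become part of the design rather than a hidden source of irreproducibility. The condition labels below are illustrative assumptions.

```python
import itertools
import random

def heterogenized_design(animal_ids, conditions, seed=None):
    """Assign animals to experimental conditions in a balanced,
    randomised fashion: shuffle the animals, then cycle through
    the conditions so each condition receives equally many animals."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    cycle = itertools.cycle(conditions)
    return {animal: next(cycle) for animal in ids}

# Illustrative conditions: two housing types crossed with two age groups.
conditions = [("cage_enriched", "age_young"), ("cage_enriched", "age_old"),
              ("cage_standard", "age_young"), ("cage_standard", "age_old")]
design = heterogenized_design(range(16), conditions, seed=7)
```

With 16 animals and 4 conditions, each condition receives exactly 4 animals; a treatment effect that replicates across such deliberately varied conditions is more likely to generalise than one observed under a single standardized condition.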

LINK

Additional reads in May 2020

SuperPlots: Communicating reproducibility and variability in cell biology

Against pandemic research exceptionalism

Pseudoscience and COVID-19 — we’ve had enough already

Spin in Scientific Publications: A Frequent Detrimental Research Practice


Coronavirus in context: Scite.ai tracks positive and negative citations for COVID-19 literature

Pandemic researchers — recruit your own best critics

The importance of being second – PLOS-wide edition

Reproducibility in science: important or incremental?

Die Reproduzierbarkeitskrise: Bedrohung oder Chance für die Wissenschaft? (The reproducibility crisis: threat or opportunity for science? In German)

In this Editorial, published in Biologie in unserer Zeit, Martin C. Michel and Ralf Dahm discuss threats and opportunities related to the current reproducibility crisis in biomedical sciences.
The authors highlight several top-down approaches currently in place to increase data quality and reproducibility: the BMBF, the EU and the NIH have launched research programs on the topic of reproducibility; various specialist journals (e.g. Nature or Molecular Pharmacology) have adapted their guidelines for authors; and the DFG has published new guidelines for Good Scientific Practice and declared them binding for all DFG-funded scientists.
In addition, a growing number of bottom-up initiatives, such as the European Quality in Preclinical Data (EQIPD) project (https://quality-preclinical-data.eu/) and the Global Preclinical Data Forum (https://www.preclinicaldataforum.org), as well as professional organizations like the PAASP Network (www.paasp.net), offer solutions, advice and training to promote preclinical data quality.

LINK

Systematic review of guidelines for internal validity in the design, conduct and analysis of preclinical biomedical experiments involving laboratory animals

Several initiatives have set out to increase the transparency and internal validity of preclinical studies. While many of the points raised in these various guidelines are identical or similar, they differ in detail and rigour. Most of them focus on reporting; only a few cover the planning and conduct of studies.
The aim of this systematic review was to identify existing experimental design, conduct, analysis and reporting guidelines relating to preclinical animal research. Based on a systematic search in PubMed, Embase and Web of Science, 58 unique recommendations were extracted. Amongst the most frequently recommended items were sample size calculations, adequate statistical methods, concealed and randomised allocation of animals to treatment, blinded outcome assessment and recording of animal flow through the experiment.
The authors highlight that, although these recommendations are valuable, there is a striking lack of experimental evidence on their importance and their relative effect on experimental outcomes and effect sizes.
This work is part of the European Quality In Preclinical Data (EQIPD) consortium.
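Two of the most frequently recommended items, sample size calculation and randomised allocation, can be sketched in a few lines. The sketch below is illustrative only (function names and defaults are our assumptions, not from the review): it uses the standard normal approximation for a two-sample comparison with Cohen's d, and balanced randomised allocation of animal IDs to groups.

```python
import math
import random

def sample_size_two_groups(effect_size, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sample comparison
    (normal approximation; effect_size is Cohen's d).
    Critical values are hard-coded for alpha=0.05 (two-sided)
    and power=0.80."""
    z_alpha = 1.959964  # standard normal quantile for alpha/2 = 0.025
    z_beta = 0.841621   # standard normal quantile for power = 0.80
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

def randomise_allocation(animal_ids, groups=("treatment", "control"), seed=None):
    """Randomised, balanced allocation: shuffle the IDs, then
    assign them to groups in rotation so group sizes stay equal."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {animal: groups[i % len(groups)] for i, animal in enumerate(ids)}

n_per_group = sample_size_two_groups(effect_size=1.0)  # large effect, d = 1.0
allocation = randomise_allocation(range(2 * n_per_group), seed=42)
```

Note that the normal approximation slightly underestimates the n required for a t-test at small sample sizes; dedicated power-analysis software applies the exact t-distribution correction.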

LINK

Variability in the analysis of a single neuroimaging dataset by many teams

To test the reproducibility and robustness of results obtained in the neuroimaging field, 70 independent teams of neuroimaging experts from across the globe were asked to analyse and interpret the same functional magnetic resonance imaging dataset.
The authors found that no two teams chose identical workflows to analyse the data, a consequence of the many degrees of freedom involved in selecting an analytical approach.
This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. These findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging data. The results emphasize the importance of validating and sharing complex analysis workflows, and the need for experts in the field to agree on minimum reporting standards.
The most straightforward way to combat such (unintentional) degrees of freedom is to include detailed data processing and analysis protocols in the study plans. As this example illustrates, such protocols need to be checked by independent scientists to make sure that they are complete and unequivocal. While the imaging field is complex and its data analysis cannot be described in a single sentence, the need for sufficiently detailed study plans is also a message to pre-registration platforms, which should not restrict the amount of information that can be pre-registered.
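How much a single, defensible analysis choice can matter is easy to demonstrate on toy data. The sketch below (our own illustration, far simpler than the study's fMRI pipelines) analyses one hand-crafted dataset twice: once keeping all observations, once excluding apparent outliers. The one-sample t statistic lands on opposite sides of the usual 5% critical value, so the two "teams" reach opposite conclusions from identical data.

```python
import math
import statistics

def one_sample_t(values):
    """t statistic for H0: population mean = 0."""
    n = len(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean / (sd / math.sqrt(n))

# One dataset: a modest positive signal plus one large negative value.
data = [0.5, 0.6, 0.4, 0.7, 0.5, 0.3, 0.6, 0.4, 0.5, -3.0]

t_raw = one_sample_t(data)                                    # keep everything
t_filtered = one_sample_t([x for x in data if abs(x) <= 2])   # drop "outliers"

# Against a two-sided 5% critical value (~2.3 at these degrees of freedom),
# t_raw is far from significant while t_filtered is overwhelmingly so.
```

Both exclusion rules are defensible in isolation, which is exactly why the protocol, including outlier handling, needs to be fixed and pre-registered before the data are seen.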

LINK