Additional reads in August 2020

Five better ways to assess science

$48 Billion Is Lost to Avoidable Experiment Expenditure Every Year

Opinion: Surgisphere Fiasco Highlights Need for Proper QA

Assuring research integrity during a pandemic

Reproducibility in Cancer Biology: Pseudogenes, RNAs and new reproducibility norms

The Problems With Science Journals Trying to Be Gatekeepers – and Some Solutions

Replications do not fail

Sex matters in neuroscience and neuropsychopharmacology

Journals endorse new checklist to clean up sloppy animal research

Paying it forward – publishing your research reproducibly

The best time to argue about what a replication means? Before you do it

How scientists can stop fooling themselves over statistics

Choice of y-axis can mislead readers

Biases at the level of study design, conduct, data analysis and reporting have been recognized as major contributors to poor reproducibility. Authors from Turkey, the UK and Germany (including PAASP partner Martin C. Michel) now add another type of bias to this growing list: “perception bias”. Using examples from the non-scientific literature, they illustrate how data presentation can be technically correct and yet create biased perceptions through choices such as the unit of measure or the scaling of the y-axis of a graph. For instance, a single study outcome can lead to three entirely different interpretations depending on which denominator is used for normalization. The authors suggest that scientists carefully consider whether their choice of graphical data representation steers readers towards one of several possible interpretations rather than allowing a neutral evaluation of the findings.
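
To make the y-axis point concrete, here is a minimal, purely illustrative sketch (the numbers and labels are invented, not taken from the article): the same pair of group means is plotted twice, once on a truncated axis and once on a zero-based axis, and the perceived size of the difference changes accordingly.

```python
import matplotlib.pyplot as plt

# Hypothetical data: two group means that differ by about 2%.
groups, means = ["Control", "Treated"], [98.0, 100.0]

fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(7, 3))

ax_trunc.bar(groups, means)
ax_trunc.set_ylim(97, 101)   # truncated axis: the 2% difference looks dramatic
ax_trunc.set_title("Truncated y-axis")

ax_zero.bar(groups, means)
ax_zero.set_ylim(0, 110)     # zero-based axis: the same difference looks negligible
ax_zero.set_title("Zero-based y-axis")

fig.tight_layout()
plt.show()
```

Both panels are technically correct; the choice between them is exactly the kind of presentation decision the authors argue can steer readers' perception.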

LINK

Using Bayes factor hypothesis testing in neuroscience to establish evidence of absence

For neuroscience and brain research to progress, it is critical to differentiate between experimental manipulations that have an effect and those that do not. The dominant statistical approaches used in neuroscience rely on p-values and can establish the former but not the latter. This makes non-significant findings difficult to interpret: do they support the null hypothesis, or are they simply not informative? In this article, the authors show how Bayesian hypothesis testing can be used in neuroscience studies to distinguish evidence of absence from mere absence of evidence. Through simple tutorial-style examples of Bayesian t-tests and ANOVA using the open-source software JASP, the article aims to empower neuroscientists to provide compelling and rigorous evidence for the absence of an effect.
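
The article's own examples use JASP's graphical interface. As a rough programmatic sketch of the same idea, the snippet below uses the BIC approximation to the Bayes factor (Wagenmakers, 2007) with statsmodels rather than the default JZS Bayes factor that JASP reports; the data are simulated so that the null hypothesis is actually true, and the variable names are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated two-group data in which the null hypothesis is true (no group difference).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "y": np.concatenate([rng.normal(0, 1, 50), rng.normal(0, 1, 50)]),
    "group": np.repeat(["control", "treated"], 50),
})

m0 = smf.ols("y ~ 1", data=df).fit()      # H0: intercept only
m1 = smf.ols("y ~ group", data=df).fit()  # H1: group effect

# BIC approximation to the Bayes factor (Wagenmakers, 2007):
# BF01 ~ exp((BIC1 - BIC0) / 2).  BF01 near 1 means absence of evidence;
# BF01 > 3 is commonly read as moderate evidence for the null (evidence of absence).
bf01 = np.exp((m1.bic - m0.bic) / 2)
print(f"BF01 = {bf01:.2f}")
```

A BF01 close to 1 would say the data are uninformative, whereas a clearly larger value quantifies support for the null, which is the distinction a non-significant p-value alone cannot make.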

LINK

Advancing science or advancing careers? Researchers’ opinions on success indicators

The way in which we assess researchers has come under increasing scrutiny in the past few years. Critics argue that current research assessments focus on productivity and increase unhealthy pressures on scientists. Yet the precise ways in which assessments should change are still open for debate. In this article, the authors designed a survey to capture the perspectives of different stakeholder groups, including research institutions, funders, publishers, researchers and students.
Looking at success indicators, the authors found that indicators related to openness, transparency, quality and innovation were perceived as highly important for advancing science but as relatively overlooked in career advancement. Conversely, indicators denoting prestige and competition were generally rated as important for career advancement but as irrelevant or even detrimental to advancing science. The authors conclude that, before we change the way in which researchers are assessed, supporting infrastructures must be put in place to ensure that researchers are able to commit to the activities that may benefit the advancement of science.

LINK

Academic criteria for promotion and tenure in biomedical sciences faculties: cross-sectional analysis of international sample of universities

Understanding the variability of criteria and thresholds for promotion and tenure applied across institutions requires a systematic empirical assessment. Therefore, the authors aimed to identify and document a set of pre-specified traditional (for example, number of publications) and non-traditional (for example, data sharing) criteria used to assess scientists for promotion and tenure within faculties of biomedical sciences among a large number of universities around the world. The study shows that the evaluation of scientists emphasises traditional criteria as opposed to non-traditional criteria. This may reinforce research practices that are known to be problematic while insufficiently supporting the conduct of better quality research and open science. The authors conclude that institutions should consider incentivising non-traditional criteria.

LINK

Reproducibility of animal research in light of biological variation

Context-dependent biological variation presents a unique challenge to the reproducibility of results in experimental animal research, because organisms’ responses to experimental treatments can vary with both genotype and environmental conditions. In contrast to the current gold standard of rigorous standardization, the authors recommend systematic heterogenization: actively incorporating biological variation into the study design by diversifying study samples and conditions. The article lays out the scientific rationale for this approach so that researchers, regulators, funders and editors can understand this paradigm shift, and presents a road map towards better practices for improving the reproducibility of animal research.
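
As a loose sketch of what systematic heterogenization can mean in practice (the factor names and levels below are hypothetical, not taken from the article), the snippet builds a small design table in which treatment is deliberately crossed with varied housing, age and testing conditions instead of being run under a single standardized condition.

```python
from itertools import product

import pandas as pd

# Hypothetical heterogenization factors; a fully standardized design would fix each at one level.
treatments = ["vehicle", "drug"]
housing = ["single", "group"]
age_weeks = [8, 12]
test_time = ["morning", "afternoon"]

design = pd.DataFrame(
    list(product(treatments, housing, age_weeks, test_time)),
    columns=["treatment", "housing", "age_weeks", "test_time"],
)
design["animals_per_cell"] = 2  # spread the same total N across heterogeneous conditions
print(design)
```

The idea is that treatment effects are then estimated across, rather than within, one narrow set of conditions, which is what the authors argue improves external validity and reproducibility.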

LINK