Reproducibility in systems biology modelling

A recent report by Tiwari et al. investigated the reproducibility of systems biology modelling by attempting to reproduce the mathematical representation of 455 kinetic models. The authors tried to

1. reproduce the published model as described (step 1),

2. if that failed, adjust their reproduction efforts based on their own modelling experience (step 2),

3. if that also failed, contact the authors of the original study for clarification and support (step 3).

When attempting to reproduce the selected models based solely on the information provided in the primary literature (step 1), only 51% of the models could be reproduced; the remaining 49% required additional efforts (steps 2 and 3). Even after adjusting the model systems and contacting the authors of the original studies for support, 37% of the models could not be reproduced at all.

Notably, over 70% of the corresponding authors did not respond when contacted by Tiwari et al., and in half of the cases where authors did respond, the model still could not be reproduced, even with their support.

This low reproducibility rate, combined with the very low response rate of the original authors, makes rigorous reporting standards in the original study essential, and these standards should be checked by the peer reviewers.

To improve the situation, Tiwari and colleagues provide specific reporting guidelines in the form of an eight-point checklist designed to increase the reproducibility of systems biology modelling.

Risk‐of‐bias VISualization (robvis): An R package and Shiny web app for visualizing risk‐of‐bias assessments

There is currently no generic tool for producing figures to display and explore the risk-of-bias assessments that routinely take place as part of systematic reviews.
In this article, the authors therefore present a new tool, robvis (Risk-Of-Bias VISualization), available as an R package and web app, which facilitates the rapid production of publication-quality risk-of-bias assessment figures. A timeline of the tool's development and its key functionality is also presented.


Additional reads in August 2020

Five better ways to assess science

$48 Billion Is Lost to Avoidable Experiment Expenditure Every Year

Opinion: Surgisphere Fiasco Highlights Need for Proper QA

Assuring research integrity during a pandemic

Reproducibility in Cancer Biology: Pseudogenes, RNAs and new reproducibility norms

The Problems With Science Journals Trying to Be Gatekeepers – and Some Solutions

Replications do not fail

Sex matters in neuroscience and neuropsychopharmacology

Journals endorse new checklist to clean up sloppy animal research

Paying it forward – publishing your research reproducibly

The best time to argue about what a replication means? Before you do it

How scientists can stop fooling themselves over statistics

Choice of y-axis can mislead readers

Biases at the level of study design, conduct, data analysis and reporting have been recognized as major contributing factors to poor reproducibility. Authors from Turkey, the UK and Germany (including PAASP partner Martin C. Michel) now add another type of bias to the growing list: "perception bias". Using examples from the non-scientific literature, they illustrate how data presentation can be technically correct yet create biased perceptions through choices such as the unit of measure or the scaling of the y-axis. For instance, a single study outcome can lead to three entirely different interpretations depending on the denominator chosen for normalization. The authors suggest that scientists should carefully consider whether their choice of graphical data representation may create perceptions that steer readers towards one of several possible interpretations rather than allowing a neutral evaluation of the findings.
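The denominator effect described above is easy to demonstrate with a toy calculation. The numbers and scenario below are entirely hypothetical (they are not from the article): the same raw measurements can be reported as an increase, a decrease, or no change, depending only on what the outcome is normalised to.

```python
# Hypothetical illustration of "perception bias" via choice of denominator.
# All values are invented for the example; none come from the cited article.

def normalised_outcomes(total_enzyme, n_cells, protein_mg):
    """Express one raw outcome against three common denominators."""
    return {
        "per sample": total_enzyme,
        "per cell": total_enzyme / n_cells,
        "per mg protein": total_enzyme / protein_mg,
    }

control = normalised_outcomes(total_enzyme=100, n_cells=1000, protein_mg=10)
treated = normalised_outcomes(total_enzyme=120, n_cells=1500, protein_mg=12)

for denominator in control:
    change = (treated[denominator] / control[denominator] - 1) * 100
    print(f"{denominator}: {change:+.0f}%")
# per sample: +20%, per cell: -20%, per mg protein: +0%
```

Because the treatment also changes the denominators (cell number, protein content), each normalisation tells a different story from the same data, which is exactly the kind of technically correct but perception-steering choice the authors warn about.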


Using Bayes factor hypothesis testing in neuroscience to establish evidence of absence

For neuroscience and brain research to progress, it is critical to differentiate between experimental manipulations which have no effect and those that do have an effect. The dominant statistical approaches used in neuroscience rely on p-values and can establish the latter but not the former. This makes non-significant findings difficult to interpret: do they support the null hypothesis or are they simply not informative? In this article, the authors show how Bayesian hypothesis testing can be used in neuroscience studies to establish both whether there is evidence of absence and whether there is absence of evidence. Through simple tutorial-style examples of Bayesian t-tests and ANOVA using the open-source project JASP, this article aims to empower neuroscientists to use this approach to provide compelling and rigorous evidence for the absence of an effect.
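To make the idea concrete, here is a minimal sketch of a default Bayesian one-sample t-test: the JZS Bayes factor (Rouder et al., 2009), which uses a Cauchy prior on the standardized effect size; JASP's default t-test is based on the same prior family with scale 0.707. The function name and code are my own illustration, not code from the article, and the sketch assumes you already have the classical t statistic and sample size.

```python
# Sketch of the JZS Bayes factor for a one-sample t-test (Rouder et al. 2009).
# BF10 > 1 is evidence for an effect; BF10 < 1 is evidence for its absence,
# which p-values alone cannot establish.
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """Bayes factor BF10 given the t statistic and sample size n.

    The effect size under H1 has a Cauchy(0, r) prior, expressed as a
    normal prior mixed over g, where g has an inverse-gamma(1/2, r^2/2)
    distribution.
    """
    nu = n - 1
    # Marginal likelihood under H0 (effect size fixed at zero)
    h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Marginal likelihood under H1: integrate over the mixing variable g
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5
                * np.exp(-r**2 / (2 * g)))

    h1, _ = integrate.quad(integrand, 0, np.inf)
    return h1 / h0

print(jzs_bf10(t=0.1, n=50))  # BF10 well below 1: evidence of absence
print(jzs_bf10(t=4.0, n=50))  # BF10 well above 1: evidence of an effect
```

The key practical point matches the article: a small t with a reasonable sample size yields BF10 clearly below 1 (positive evidence for the null), whereas a non-significant p-value would only leave the question open.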


Advancing science or advancing careers? Researchers’ opinions on success indicators

The way in which we assess researchers has come under increasing scrutiny in the past few years. Critics argue that current research assessments focus on productivity and thereby increase unhealthy pressures on scientists. Yet the precise ways in which assessments should change are still open for debate. In this article, the authors designed a survey to capture the perspectives of different interest groups, including research institutions, funders, publishers, researchers and students.
When looking at success indicators, the authors found that indicators related to openness, transparency, quality, and innovation were perceived as highly important for advancing science, but as relatively overlooked in career advancement. Conversely, indicators denoting prestige and competition were generally rated as important for career advancement, but irrelevant or even detrimental to advancing science. The authors concluded that, before we change the way researchers are assessed, supporting infrastructures must be put in place to ensure that researchers are able to commit to the activities that benefit the advancement of science.