Reproducibility Crisis: Are We Ignoring Reaction Norms? In this letter, published in Trends in Pharmacological Sciences, Bernhard Voelkl and Hanno Würbel discuss the importance of phenotypic plasticity in experimental design and analysis. Generally, phenotypic plasticity is the capacity of a single genotype to exhibit variable phenotypes in different environments. Due to this variability, results should be expected to differ to some degree whenever an in vivo experiment is replicated.
Given that between-experiment variation in the measured parameter can be substantial, phenotypic plasticity (or the reaction norm) should be considered a potential source of poor reproducibility. Furthermore, because many environmental factors cannot be equalized across laboratories by research standards, increasingly rigorous standardization within each laboratory will only further increase the differences between laboratory-specific parameter estimates, resulting in even lower reproducibility – an effect known as the 'standardization fallacy'.
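The standardization fallacy can be made concrete with a small simulation. The sketch below is our own illustration, not taken from the letter: each laboratory carries an uncontrollable environmental shift, so strict within-lab standardization (a single housing condition) yields precise but lab-specific estimates that rarely agree across labs, whereas heterogenizing conditions within each lab restores statistical compatibility. All numbers (effect sizes, sample sizes, the 1.96 compatibility criterion) are illustrative assumptions.

```python
# Illustrative simulation of the 'standardization fallacy' (our sketch, not the letter's).
import numpy as np

rng = np.random.default_rng(42)

def lab_estimate(n_mice: int, n_conditions: int, env_sd: float = 1.0, noise_sd: float = 0.5):
    """Mean and standard error of one lab's measurement.

    n_conditions = 1 models strict standardization (one fixed housing
    condition per lab); larger values model deliberate heterogenization.
    """
    envs = rng.normal(0.0, env_sd, size=n_conditions)   # lab-specific, uncontrollable conditions
    y = rng.choice(envs, size=n_mice) + rng.normal(0.0, noise_sd, size=n_mice)
    return y.mean(), y.std(ddof=1) / np.sqrt(n_mice)

def replication_rate(n_conditions: int, n_pairs: int = 5000) -> float:
    """Fraction of lab pairs whose estimates are statistically compatible."""
    hits = 0
    for _ in range(n_pairs):
        (m1, s1) = lab_estimate(24, n_conditions)
        (m2, s2) = lab_estimate(24, n_conditions)
        hits += abs(m1 - m2) < 1.96 * np.hypot(s1, s2)  # approximate compatibility check
    return hits / n_pairs

print("standardized (1 condition)  :", replication_rate(1))   # ~0.15: precise but conflicting
print("heterogenized (8 conditions):", replication_rate(8))   # ~0.70: wider CIs, better agreement
```

Under these assumptions, tighter standardization makes each lab's estimate more precise around its own lab-specific value, so replications conflict more often, not less.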
Last Week Tonight with John Oliver: Scientific Studies. On Last Week Tonight, an American late-night talk and news satire television program, comedian John Oliver discussed how and why media outlets so often report untrue or incomplete information as scientifically proven fact, and even commented on the 'reproducibility crisis' in the life sciences. The problem, Oliver said in this smart breakdown of the issue, is that the media often blow findings out of proportion. Reporters very rarely go through a study's methodology or explain its caveats, such as a small sample size. And if you actually prefer science with a little less – well, science – Oliver has a solution for that too: watch the segment to find out.
The Experiment Factory: Standardizing Behavioral Experiments. Vanessa V. Sochat and colleagues from Stanford University have presented the Experiment Factory, a modular infrastructure that applies a collaborative, open source framework to the development and deployment of web-based psychology experiments. Psychology is one of the fields of science that has been affected by the so-called 'reproducibility crisis' (Open Science Collaboration (2015), Estimating the reproducibility of psychological science), and the authors argue that reproducible research in behavioral psychology is conditional on the deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. This paper describes the modular infrastructure of the Experiment Factory – experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension.
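To give a feel for the kind of driver application the paper describes, here is a minimal sketch of a web server exposing a folder of self-contained HTML/JavaScript experiments. This is an illustration under stated assumptions, not the Experiment Factory's actual code: the `experiments/` folder layout, the route names, and the use of Flask are all hypothetical.

```python
# Minimal sketch only -- NOT the Experiment Factory's actual implementation.
# Assumes each experiment is a self-contained folder of static HTML/JS assets,
# mirroring the modular layout described in the paper.
from pathlib import Path

from flask import Flask, abort, send_from_directory

EXPERIMENTS_DIR = Path("experiments")  # hypothetical local experiment repository

app = Flask(__name__)

@app.route("/experiments/<name>/")
def run_experiment(name: str):
    """Serve the entry page of one modular experiment."""
    exp_dir = EXPERIMENTS_DIR / name
    if not exp_dir.is_dir():
        abort(404)
    return send_from_directory(exp_dir, "index.html")

@app.route("/experiments/<name>/<path:asset>")
def experiment_asset(name: str, asset: str):
    """Serve supporting assets (JS, CSS, stimuli) for an experiment."""
    return send_from_directory(EXPERIMENTS_DIR / name, asset)

if __name__ == "__main__":
    # Local deployment; the paper also describes virtual machines for cloud use.
    app.run(host="0.0.0.0", port=8080)
```

The paper's point is that packaging such a server together with its experiments into a shared virtual machine lets every laboratory deploy equivalent experiments, which is the precondition for reproducible behavioral research.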
1,500 scientists lift the lid on reproducibility. According to the survey conducted by Nature, 52% of the 1,576 researchers polled, representing different areas of science, see a 'significant reproducibility crisis'. As acknowledged by the author of the report, Monya Baker, 'the survey – which was e-mailed to Nature readers and advertised on affiliated websites and social-media outlets as being "about reproducibility" – probably selected for respondents who are more receptive to and aware of concerns about reproducibility'. Nevertheless, it provides important information, especially on the factors that may contribute to this irreproducibility and the corresponding solutions. Interestingly, among the 11 approaches to improving reproducibility in science, the most endorsed categories were 'more robust experimental design', 'better statistics' and 'better mentorship', while the lowest-ranked item was 'journal checklists'.
What does research reproducibility mean? In this article, published in Science Translational Medicine, the authors Steven N. Goodman, Daniele Fanelli and John P. A. Ioannidis discuss the fact that the terms reproducibility, replicability, reliability, robustness, and generalizability are not standardized, so many different definitions coexist. As a consequence, this 'has led to confusion, both conceptual and operational, about what kind of confirmation is needed to trust a given scientific result'. Finally, the authors offer working definitions to improve both communication and understanding.