Variability in the analysis of a single neuroimaging dataset by many teams

To test the reproducibility and robustness of results in the neuroimaging field, 70 independent teams of neuroimaging experts from across the globe were asked to analyse and interpret the same functional magnetic resonance imaging (fMRI) dataset.
The authors found that no two teams chose identical workflows to analyse the data – a consequence of the many degrees of freedom and the flexibility in choosing among analytical approaches.
This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset.
These findings show that analytical flexibility can have substantial effects on scientific conclusions and identify factors that may be related to variability in the analysis of fMRI data. The results emphasize the importance of validating and sharing complex analysis workflows, and the need for experts in the field to come together and agree on minimum reporting standards.
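To make the meta-analytical idea concrete, below is a minimal sketch of one common way to aggregate voxel-wise z-maps across many teams, Stouffer's method. This is an illustration under simplified assumptions (independent teams, toy data), not the authors' actual pipeline, and all shapes and numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def stouffer_consensus(z_maps):
    """Combine voxel-wise z-maps from many teams into one consensus
    z-map using Stouffer's method: z_combined = sum(z) / sqrt(n).

    z_maps: array of shape (n_teams, n_voxels).
    """
    z = np.asarray(z_maps, dtype=float)
    combined_z = z.sum(axis=0) / np.sqrt(z.shape[0])
    p = 2 * stats.norm.sf(np.abs(combined_z))  # two-sided p-values
    return combined_z, p

# Toy data: 70 "teams", 1000 "voxels", a weak shared signal in 10% of voxels
rng = np.random.default_rng(0)
signal = np.zeros(1000)
signal[:100] = 0.4
maps = rng.normal(loc=signal, scale=1.0, size=(70, 1000))

z_cons, p_cons = stouffer_consensus(maps)
print((p_cons < 0.05).sum(), "voxels significant in the consensus map")
```

Note that teams analysing the same dataset are not statistically independent, so a naive combination like this overstates the evidence; the sketch is only meant to show how aggregation can recover a consensus signal that individual noisy maps obscure.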
The most straightforward way to combat such (unintentional) degrees of freedom is to include detailed data processing and analysis protocols in the study plans. As this example illustrates, such protocols need to be checked by independent scientists to make sure that they are complete and unambiguous. While the imaging field is complex and its data analysis cannot be described in one sentence, the need for sufficiently detailed study plans is also a message to pre-registration platforms, which should not impose restrictions on the amount of information that can be pre-registered.

Coronavirus: The need for solid, robust data

Like everyone else, we are following the current coronavirus situation and the beginning of a global experiment with great concern: What are the right measures to fight the virus, and for how long should these measures continue if the pandemic churns across the globe unabated? How can policymakers tell if they are doing more good than harm? In this opinion piece and an interview, John P.A. Ioannidis highlights the need to generate robust, reliable data so that decision-making can be based on solid facts and a profound understanding of how this dangerous virus spreads and how it can be contained.

Due to the coronavirus, the anti-malaria drug chloroquine is receiving a lot of attention and excitement these days. Given the urgent need to find a treatment, it is very frustrating that, here again, worrisome reports are emerging about the quality of the evidence for this potential treatment of the coronavirus disease.

Replication Study: The microRNA miR-34a inhibits prostate cancer stem cells and metastasis by directly repressing CD44

The Reproducibility Project: Cancer Biology aims to replicate a number of landmark studies in the field of tumor biology published between 2010 and 2012. The latest replication attempt, published on March 12th in eLife, focused on a 2011 publication by Liu et al. The attempt to replicate the major findings was unsuccessful. The replication study by Yan et al. reported increased miR-34a expression in CD44+-derived prostate cancer cells, whereas the original report showed decreased levels. Similar opposing results were obtained with respect to tumor growth: the replication study did not reveal changes in tumor size, whereas the original study showed decreased growth after miR-34a overexpression. In line with these results, the mechanism proposed in the original paper – decreased expression of CD44 through miR-34a binding to the 3'UTR of CD44 – could not be replicated.

Once again, this unbiased replication study shows the importance of replicating findings and establishing internal as well as external validity before advancing a project and starting resource-consuming translational programs.

Replication Study: Intestinal inflammation targets cancer-inducing activity of the microbiota

As part of the Reproducibility Project: Cancer Biology, this report describes the attempt to replicate critical experiments from the paper "Intestinal Inflammation Targets Cancer-Inducing Activity of the Microbiota" (Arthur et al., 2012). In agreement with the original study, the authors did not observe any changes in bacterial growth or colonization when the polyketide synthase (pks) genotoxic island was deleted from E. coli NC101.
However, in the in vivo studies using IL-10-deficient mice, the replication attempt showed increased mortality and severity of inflammation compared to the original study. This difference was likely due to an insufficient description of the methodology in the original study. Additionally, early deaths occurred during azoxymethane treatment, with higher mortality observed in NC101Δpks mono-associated mice compared to NC101.
As a consequence, the data obtained were unable to address whether, under the conditions of the original study, NC101 and NC101Δpks differ in inflammation, invasion, and neoplasia. Thus, this replication attempt highlights the need to describe experimental methods clearly to enable accurate reproduction of experimental studies.

ECNP Preclinical Network Data Prize: The Award Winner

The ECNP Preclinical Network has announced the winner of the Award for the best publication of negative results in preclinical neuroscience. The Award was presented by Dr. Patricia Kabitzke (Cohen Veterans Bioscience, the main sponsor of the Award) during the ECNP Congress in Barcelona on October 7, 2018, to Prof. Tom Beckers (KU Leuven, Belgium) for the following paper:
Luyten L, Beckers T (2017) A preregistered, direct replication attempt of the retrieval-extinction effect in cued fear conditioning in rats. Neurobiology of Learning and Memory 144: 208–215.

Since its establishment in 2014, one of the core goals of the ECNP Preclinical Data Forum Network has been to facilitate the sharing of unpublished information between scientists (in particular those from industry, where many obstacles exist). With this goal in mind, the evaluation committee decided to award a Special Distinction to a paper written in collaboration by scientists from three pharma companies:
Latta-Mahieu M, Elmer B, Bretteville A, et al (2018) Systemic immune-checkpoint blockade with anti-PD1 antibodies does not alter cerebral amyloid-β burden in several amyloid transgenic mouse models. Glia 66(3): 492–504.

Overall, this was a successful project, and there is an ongoing discussion with the sponsor (Cohen Veterans Bioscience) about repeating the competition. Further, the experience gained during this project (most importantly, about what makes a negative study worth sharing) will be summarized and published soon.

From left to right: Laurent Pradier (Sanofi), Patricia Kabitzke (Cohen Veterans Bioscience), Tom Beckers (KU Leuven), Thomas Steckler (ECNP Preclinical Data Forum co-chair, Janssen) and Anton Bespalov (ECNP Preclinical Data Forum co-chair, PAASP).

Social sciences: Replication studies show that 3 out of 5 is good

Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015
Being able to replicate scientific findings is crucial for scientific progress. The authors of this study, published in Nature Human Behaviour, tried to replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications followed analysis plans that were reviewed by the original authors and pre-registered before the replications were run. The replications were highly powered, with sample sizes on average about five times larger than in the original studies.
In summary, the authors successfully replicated 13 out of 21 findings (62%), and the effect sizes of the replications were on average about 50% of the original effect sizes.
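These two numbers are connected: for a two-sample comparison, the required sample size scales roughly with 1/d², where d is the standardized effect size, so an effect half as large needs about four times as many subjects for the same power. A minimal sketch using statsmodels, with d = 0.5 as a purely hypothetical original effect size:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

d_original = 0.50          # hypothetical original effect size (Cohen's d)
d_true = d_original / 2    # replication effects averaged ~50% of originals

# Per-group sample size for 80% power at alpha = 0.05 (two-sided t-test)
n_orig = analysis.solve_power(effect_size=d_original, alpha=0.05, power=0.8)
n_half = analysis.solve_power(effect_size=d_true, alpha=0.05, power=0.8)

print(f"n per group at d={d_original}: {n_orig:.0f}")
print(f"n per group at d={d_true}: {n_half:.0f} (~{n_half / n_orig:.1f}x)")
```

This yields roughly 64 versus 253 subjects per group – close to a four-fold increase – which is why replications designed to detect effects smaller than the published estimates need substantially larger samples.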
Importantly, however, the authors also estimated peer beliefs about replicability using surveys and prediction markets: the prediction market beliefs and the survey beliefs were highly correlated, and both were highly correlated with successful replication – meaning that peers were very effective at predicting future replication success.
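As an illustration of how such a relationship can be quantified: final prediction-market prices can be read as peer probabilities of replication and correlated with the observed binary outcomes, e.g. via a point-biserial correlation. The numbers below are entirely made up for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Made-up example: final prediction-market prices (peer probabilities
# of replication) and the observed binary replication outcomes.
market_beliefs = np.array([0.85, 0.30, 0.75, 0.20, 0.90, 0.40, 0.65, 0.55])
replicated = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Point-biserial correlation between peer beliefs and outcomes
r, p = stats.pointbiserialr(replicated, market_beliefs)
print(f"point-biserial r = {r:.2f} (p = {p:.3f})")

# Simple calibration check: does a price above 0.5 predict replication?
hits = (market_beliefs > 0.5) == replicated.astype(bool)
print(f"accuracy at the 0.5 threshold: {hits.mean():.0%}")
```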
Interestingly, this successful prediction market approach contrasts with other reports from the area of cancer biology. In a recent publication, Daniel Benjamin and his colleagues analysed the ability of cancer researchers to judge whether selected preclinical reports could be reproduced or not. On average, researchers forecast a 75% probability of replicating statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported).
At this point, it is not clear which factors caused these divergent findings, and future studies will tell whether there is indeed a difference between the social sciences and cancer biology when it comes to predicting successful replication attempts.