The reproducibility crisis in science is a multifaceted problem involving practices and incentives, both in the laboratory and in publication.
Using SciScore, an automated tool that reviews the methods sections of manuscripts for criteria drawn from the NIH and other reporting guidelines (e.g., ARRIVE and RRIDs), the authors analyzed ~1.6 million PubMed Central papers to determine the degree to which articles addressed quality criteria such as blinding, randomization, power calculation, and cell line authentication. The tool scores each paper on a ten-point scale, identifying sentences associated with compliance with rigor criteria (5 points) and with key resource identification and authentication (5 points). From these data, the authors built the Rigor and Transparency Index, which is the average score of the analyzed papers in a given journal.
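To make the scoring scheme concrete, here is a minimal sketch in Python of how such a per-paper score and the journal-level index could be computed. The criterion names, the even one-point-per-criterion weighting, and the data structures are illustrative assumptions for this sketch, not SciScore's actual implementation.

```python
from statistics import mean

# Illustrative criteria, assuming an even 5 + 5 split: five rigor
# criteria and five key-resource criteria, one point each.
# (Assumed names; not SciScore's actual criterion list.)
RIGOR_CRITERIA = [
    "blinding", "randomization", "power_calculation",
    "cell_line_authentication", "ethics_approval",
]
RESOURCE_CRITERIA = [
    "antibody_rrid", "organism_rrid", "cell_line_rrid",
    "software_rrid", "plasmid_rrid",
]

def score_paper(detected: set[str]) -> int:
    """Score a paper on a ten-point scale: one point per criterion
    for which a supporting sentence was detected in the methods."""
    rigor = sum(c in detected for c in RIGOR_CRITERIA)         # up to 5 pts
    resources = sum(c in detected for c in RESOURCE_CRITERIA)  # up to 5 pts
    return rigor + resources

def rigor_transparency_index(journal_scores: list[int]) -> float:
    """The journal-level index: the mean score of its analyzed papers."""
    return mean(journal_scores)

# Example: two papers from a hypothetical journal.
papers = [
    {"blinding", "randomization", "antibody_rrid"},            # scores 3
    {"power_calculation", "cell_line_rrid", "software_rrid"},  # scores 3
]
scores = [score_paper(p) for p in papers]
print(rigor_transparency_index(scores))  # 3.0, below the five-point midpoint
```

In this sketch the index depends only on how many criteria are detected, not on which ones, which is why an average below five can be read as fewer than half of the criteria being addressed on average.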
The analyses showed that the average score across all journals has increased since 1997 but remains below five, indicating that fewer than half of the rigor and reproducibility criteria are routinely addressed by scientists.
However, not all quality criteria are equally important for every experiment, nor should every study be expected to meet all of them. It is therefore important to note that the question of whether a given study should address a particular criterion is currently too difficult for the available SciScore tool to answer.