In the current hyper-competitive research environment, scientists feel constant pressure to publish novel findings as often and as fast as possible, because career progression and funding are tied mainly to publication records. Reporting positive results is often encouraged over negative or inconclusive outcomes, which places a premium on novelty at the expense of rigor and robust methods. Ultimately, the number of publications in high-impact-factor journals has become the “currency” used to assess scientific productivity and performance.
However, citation-based metrics are a weak substitute for actually reading a paper when judging its quality. Given the huge volume of papers published every day, it is easy to see why such imperfect metrics thrive: they allow researchers to be compared instantly.
In a recent article published in PLOS ONE, the authors propose to push back against the pressure to “publish or perish” by introducing random audits that assess a small proportion of publications in detail. The auditors could examine complex measures of quality that “instant” metrics will never capture, such as good research practice, reproducibility, and data robustness.
This is not a new idea: audits within the life sciences were suggested by Adil E. Shamoo and others over 30 years ago. In fact, similar approaches are used to discourage drunk driving and tax evasion. Of course, people still drink and perform various tricks to avoid paying taxes, but there is good evidence that the risk of being audited by the tax authorities significantly reduced tax-code violations and that the introduction of random breath tests greatly reduced fatalities.
To examine the potential of random audits, Barnett et al. used an existing simulation of the research world in which simple Darwinian principles showed how paper numbers quickly trump quality when researchers are rewarded purely for how many papers they produce. The authors then added random audits that could spot researchers producing poor-quality work and remove them. Importantly, the audits also improved behavior in the wider research community, because researchers whose colleagues were audited were prompted to produce fewer, higher-quality papers. A toy sketch of this kind of model is shown below.
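To make the mechanism concrete, here is a minimal, purely illustrative toy model in Python. It is not the authors' published simulation; the parameter values, the quantity-quality trade-off, and the turnover rate are all hypothetical assumptions. It simply shows the two forces at work: selection that rewards paper counts, and random audits that remove labs caught producing low-quality work.

```python
import random

# Purely illustrative toy model, NOT the published simulation by Barnett et al.
# All parameter names and values below are hypothetical assumptions.

N_LABS = 100       # number of research groups in the population
N_ROUNDS = 200     # selection rounds ("generations")
AUDIT_RATE = 0.02  # fraction of published papers randomly audited per round

class Lab:
    """A research group with a publication strategy (papers per round)."""
    def __init__(self, output):
        self.output = output
        # Assumed quantity-quality trade-off: more papers -> lower quality.
        self.quality = max(0.0, 1.0 - (output - 1) / 10)

def run(audit=True):
    labs = [Lab(random.randint(1, 5)) for _ in range(N_LABS)]
    for _ in range(N_ROUNDS):
        if audit:
            # Each paper has a small chance of being audited; a lab is removed
            # if any of its audited papers fails the quality check.
            labs = [lab for lab in labs
                    if not any(random.random() < AUDIT_RATE and
                               random.random() > lab.quality
                               for _ in range(lab.output))]
        # Baseline turnover: ~10% of labs retire each round (hypothetical rate).
        labs = [lab for lab in labs if random.random() > 0.10]
        # Darwinian selection: vacancies are filled by successors of existing
        # labs, chosen with probability proportional to their paper output,
        # with a small random mutation of the inherited strategy.
        survivors = labs[:]
        while len(labs) < N_LABS:
            if survivors:
                parent = random.choices(survivors,
                                        weights=[l.output for l in survivors])[0]
                child_output = max(1, parent.output + random.choice([-1, 0, 1]))
            else:
                child_output = random.randint(1, 5)
            labs.append(Lab(child_output))
    mean_output = sum(l.output for l in labs) / len(labs)
    mean_quality = sum(l.quality for l in labs) / len(labs)
    return mean_output, mean_quality

print("without audits:", run(audit=False))
print("with 2% audits:", run(audit=True))
```

Under these assumptions, the run without audits drifts toward ever-higher paper counts and ever-lower quality, while even a 2% audit rate keeps high-output, low-quality strategies in check.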
Further, this analysis demonstrated that the spiral of competition to produce ever greater paper numbers could be avoided by auditing around 2% of all published work. For research funded by the US National Institutes of Health, it was estimated that this would cost around US$16 million per year. Given that approximately US$28 billion per year is spent in the United States alone on preclinical research that is not reproducible (Freedman et al. 2015), this is a fair investment in research quality that would likely pay for itself many times over.
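For a sense of scale, a quick back-of-envelope comparison using only the two figures quoted above (a rough ratio, not a formal cost-benefit analysis):

```python
# Back-of-envelope comparison using only the two figures quoted above.
audit_cost = 16e6            # estimated yearly cost of auditing ~2% of NIH-funded work (USD)
irreproducible_spend = 28e9  # estimated yearly US spend on irreproducible preclinical research (USD)

print(f"audit cost / irreproducible spend = {audit_cost / irreproducible_spend:.2%}")
# prints roughly 0.06%
```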
Auditing is likely the fastest and most effective tool for enhancing the quality of reported data. As it can essentially be reduced to tracing the origin and existence of the data reported in a paper, it does not have to be stressful or time-consuming. Yet several obstacles may prevent these ideas from being widely accepted:
First, many researchers raise valid concerns about who the auditors should be and the power they would wield. Should these be peers? Or, indeed, third-party organizations like PAASP? Universities may have policies that prevent access to data. For example, Washington University (St. Louis, MO, USA) has a policy according to which scientists “are not allowed to freely distribute their scientific data to any third party organization” (Shin-ichiro Imai, personal communication to A.B., March 27, 2017).
Second, some organizations may actually suffer from auditing. For most publishers, for example, the business model depends critically on submission numbers, and it does not require much imagination to forecast that the introduction of a new auditing system would reduce the number of submissions received, at least in the near future. Indeed, we (PAASP team members) have approached several publishers and journals offering to run a pilot experiment whereby journals would update their instructions for authors to indicate that, should a manuscript be accepted for publication, it may be subject to a random check. However, even journals with very high submission rates are not prepared to take the risk.
Thus, Barnett and colleagues have presented one of the most effective solutions to current issues of research data quality, but we do not yet seem ready to implement it and to tolerate the consequences. The only area where auditing is accepted and used effectively is non-academic research, where research data are a commodity and buyers of data sets face direct financial consequences if the quality of an acquired asset is unsatisfactory.