Anton Bespalov (PAASP)
Adrian G. Barnett (Queensland University of Technology, Brisbane, Australia)
C. Glenn Begley (BioCurate, Melbourne, Australia)
*For a condensed version of this commentary, please visit the Nature website.
Many of the signals of concern about the current research system originate from industry, and they are not always well received by the academic scientific community.
In this commentary, we suggest why the pharmaceutical and biotech industry may be particularly sensitive to the issue of research quality. We emphasize that nothing discussed here should be taken to suggest that differences in research quality exist between industry and academia (in either direction). Further, we believe that, in the past, there may have been flaws in both academic and industry systems. Indeed, high research quality is important for both industry and academia, and corresponding efforts to increase quality are ongoing in both areas as well as in various public-private partnerships (e.g. the IMI-supported consortium EQIPD).
Our aim is to reveal a paradox: high standards of research rigor have a differential impact depending on the research objectives. We believe that, if left unattended, this paradox will significantly endanger future communication between industry and academia.
Normative responses to guidelines and scientific publishing
Biomedical research, and drug discovery in particular, has not yet found a way to avoid animal experimentation. The strong need to adhere to the highest ethical standards in animal research has resulted in several important initiatives related to the reporting of animal experiments.
For example, Nature Publishing Group (NPG) has developed and introduced a checklist for manuscript submissions in the life sciences across its journals. Four years after the checklist was made mandatory, its impact was analyzed by Malcolm Macleod and the NPQIP Collaborative group (Macleod et al., 2017). The outcome of this analysis was very clear: the number of NPG publications providing information on randomisation, blinding, sample size calculation, and exclusions (the so-called Landis 4 criteria) increased from 0 out of 203 before May 2013 to 31 out of 181 (17%) after May 2013. In contrast, the proportion of non-NPG publications that met all Landis 4 criteria did not change (1 out of 164 before and 1 out of 189 after May 2013). Overall, this analysis identified a substantial improvement in the reporting of risks of bias in in vivo research in NPG journals following the change in editorial policy.
This is certainly encouraging, but one needs to be aware of the “danger of normative responses, whereby scientists simply satisfy the guidelines… at a time when it is too late to take corrective actions on experimental conduct” (Vogt et al., 2016). Indeed, in response to a request to describe whether the experimental findings were reliably reproduced, some scientists state: “The attempts at replication were successful”.
Obviously, such responses are not what NPG and its readers expect to see, but they nevertheless allow researchers to tick the box in the checklist and meet the formal requirements. In other words, there is no connection between desired behaviors (such as transparent reporting of important study design information) and reward (such as publication in respected journals). As a result, we may well observe behaviors other than the desired ones being unintentionally reinforced.
Feedback control
Machines and mechanisms around us are often controlled through feedback loops. Behaviour may also be efficiently controlled by its consequences. The probability of a desired behaviour can be increased when an appetitive stimulus or withdrawal of an aversive stimulus is made contingent upon this behaviour (positive and negative reinforcement, respectively). And the probability of an undesired behaviour decreases when this behaviour is followed by an aversive stimulus or withdrawal of an appetitive stimulus (i.e. punishment).
These behavioural reinforcement processes are mostly studied in experimental psychology laboratories, but the reinforcement principles, be it a carrot or a stick, operate in real life as well. 
Why not, then, apply the same principles to scientific publishing? We (AB) approached three major publishing houses and asked them to consider the following experiment. We suggested identifying a journal that receives a large number of submissions and that would agree to modify its Instructions for Authors to request the authors’ consent to a potential assessment of their laboratory notebooks if the manuscript is accepted and published. The proposal was not accepted by any publisher, and the reason was not the cost of the assessments. Despite the low probability of any given paper being subjected to an assessment (which could be 1 in 1,000), explicit reinforcement contingencies were thought to endanger submission numbers, which could put the publishing business model at risk.
The lack of interest in our experiment was not surprising, because feedback control, even under conditions of so-called partial reinforcement, can be very powerful. One well-known example is tax systems: not every tax return is audited, but the possibility of being audited is sufficiently high to keep most (though certainly not all) taxpayers compliant with the law.
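The deterrent arithmetic behind partial reinforcement can be sketched with a simple expected-value calculation. The numbers below are entirely illustrative assumptions, not empirical estimates of any real tax or auditing system:

```python
# Illustrative sketch: why a low audit probability can still deter.
# All figures are assumed for illustration only.

def expected_payoff(gain_from_cheating, audit_probability, penalty):
    """Expected net payoff of non-compliance under random auditing."""
    return gain_from_cheating - audit_probability * penalty

p_audit = 0.02        # assume only 2% of returns are audited
gain = 1_000.0        # assumed gain from non-compliance
penalty = 100_000.0   # assumed fine plus reputational cost if caught

net = expected_payoff(gain, p_audit, penalty)
print(net)  # 1000 - 0.02 * 100000 = -1000: cheating does not pay on average
```

Even with a 2% audit rate, a sufficiently large penalty makes non-compliance a losing proposition in expectation, which is the logic behind both tax audits and the proposed random assessment of laboratory notebooks.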
It has been suggested that the research system could adopt a similar model; if it did, auditing could be an efficient tool to trigger positive changes in research practices. A recent simulation study demonstrated that auditing may not only have a positive impact but may also be an affordable policy option (Barnett et al., 2018).
Feedback control may certainly be an effective means against undesired behaviours such as normative responses to existing and emerging standards for reporting animal experiments. However, a broader implementation of such practices depends on the balance between (perceived) positive and negative consequences.
Recognizing the benefits of higher research rigor standards
The consequences of introducing feedback control, such as auditing, into biomedical research may be perceived by scientists as detrimental. For example, the introduction of routine, random data auditing may result in fewer publications; more null (“negative”) studies; more time needed to complete projects or studentships; and some laboratories and institutions may even become less competitive. In other words, greater research rigor may not translate into better academic career opportunities, better funding, or greater peer recognition. In fact, it could have the opposite effect.
This is likely why efforts such as the ARRIVE guidelines, for which a number of journals have declared support, are not followed and do not have the intended impact (Hair et al., 2018). As one of our colleagues explained: “If I try to follow the guidelines, it is quite some effort but does not help me to publish my work”. This comment illustrates the complexity of competing reinforcement contingencies and a balance that today is not always in favor of research rigor (Figure 1).

Most scientists aim to do good research and need to be supported in removing the constraints that prevent them from following best practices. Paradoxically, industry scientists may be more open and willing to talk about increased research rigor because, for them, success metrics are not based on the number of publications or journal impact factors. Indeed, if industry introduces and maintains higher quality standards, this will help rather than hinder reaching its ultimate objectives (Figure 2).

We, as a scientific community, should look for opportunities to give academic researchers the same freedom: by revising the publication system, by educating funders, and by testing various schemes of feedback control, from alternative success metrics to random auditing and the financial incentives proposed by Rosenblatt (2016).
This proposal is not about the wisdom of industry trying to teach academia. It is about the yardstick by which applied biomedical research is measured and a responsibility to align current and future research practices with the long-term objectives of research.