Behavioral scientists know very well how to train humans and animals to perform a desired response and how to extinguish behaviors that are not wanted.
Why, then, do we not ask behavioral scientists to help us develop strategies that eliminate the unwanted behaviors leading to research publications of low or uncertain quality, and replace them with behaviors that promote research rigor and transparency?
Perhaps we have not yet approached behavioral scientists because it is not actually clear to us which behaviors we would like to promote. In this commentary, we attempt to analyze the impact of publication guidelines and checklists.
The ARRIVE guidelines are certainly well known to our readers and were among the first major efforts to introduce higher reporting standards into the biomedical literature. Our audience, unfortunately, is not representative of the general research community, and it has been discussed many times that, although published in 2010, the ARRIVE guidelines are not widely known and therefore not followed. This stands in sharp contrast to the long list of journals that have declared their support for the ARRIVE guidelines and are expected to promote adherence to them. Why do the journals’ declarations not have a greater impact? There may be a number of reasons, but we would like to quote a colleague: “If I try to follow the guidelines, it is quite some effort but does not help me to publish my work”.
Let’s turn to another notable initiative – a checklist developed by the Nature Publishing Group (NPG) for manuscript submissions to its life sciences journals. This checklist covers many items of the ARRIVE guidelines and even explicitly refers to them.
Four years after these checklists were introduced, their impact was analyzed, and we highlighted the report by Malcolm Macleod and the NPQIP Collaborative group in one of our previous Newsletter issues. The outcome of this analysis was very clear: the number of NPG publications meeting all relevant Landis 4 criteria (randomisation, blinding, sample size calculation, exclusions) increased from 0/203 prior to May 2013 to 31/181 (16.4%) after May 2013. In contrast, the proportion of non-NPG publications meeting all relevant Landis 4 criteria did not change (1/164 before, 1/189 after). Overall, the authors identified a substantial improvement in the reporting of risks of bias in in vivo research in NPG journals following the change in editorial policy, to a level that had not been previously observed.
Encouraging? Certainly yes, but one needs to be aware of the “danger of normative responses, whereby scientists simply satisfy the guidelines (e.g. ARRIVE) at a time when it is too late to take corrective actions on experimental conduct (Vogt et al. 2016).”
When first introduced, the checklists were accessible only to editors and reviewers and were not made available to readers once the manuscripts were accepted for publication. This practice did draw criticism, and we hope that we have contributed to the subsequent change in editorial policy.
What can we learn from these published checklists? The information that authors provide is quite diverse, but the following examples nicely illustrate the point we are trying to make in this commentary:
In one recent paper, when answering the question of how the sample size was determined, the authors stated the following: “The sample size was determined based on preliminary results or similar experiments carried-out in the past. Power Analysis was performed using G-power in order to estimate the number of animals require, for a signal-to-noise ratio of 1.4 and 80% to 90% power assuming a 5% significance level.”
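For illustration, the quoted parameters can be checked with a standard sample-size formula. The sketch below uses the normal approximation for a two-sided, two-sample comparison (G*Power itself uses exact t-distributions, so its answers will be slightly larger); the standardized effect size of 1.4 and the 5% significance level are taken directly from the quoted statement.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, power: float, alpha: float = 0.05) -> int:
    """Approximate animals per group for a two-sided two-sample comparison,
    via the normal approximation: n = 2 * ((z_(1-alpha/2) + z_power) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Standardized effect size ("signal-to-noise ratio") of 1.4, as quoted above
print(n_per_group(1.4, 0.80))  # -> 9 animals per group
print(n_per_group(1.4, 0.90))  # -> 11 animals per group
```

In other words, the quoted parameters do pin down a concrete group size (roughly 9–11 animals per group under this approximation) – which makes it all the more striking that the checklist answer never states what sample size was actually used, or whether it came from the power analysis or from the “preliminary results”.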
When asked to describe whether the experimental findings were reliably reproduced, the answer was: “The attempts at replication were successful”. Such answers formally satisfy the checklist while conveying hardly any verifiable information.
To conclude, there is currently no connection between desired behaviors (such as transparent reporting) and reinforcement (such as publication in higher-impact-factor journals). As a result, we may well observe other behaviors – not the desired ones – being unintentionally reinforced.
In the book “Don’t Shoot the Dog!”, Karen Pryor gives a remarkably entertaining and educational introduction to the field of behavioral training. Perhaps, indeed, we need to talk to behavioral scientists?