In the previous issue of the Newsletter, we highlighted the recent paper by Bernhard Voelkl et al. from the University of Bern. This publication has apparently caught the attention of many of our colleagues and triggered diverse feedback. We would like to mention one particular commentary published by ‘DrugMonkey’.

All in all, this is a rather puzzling commentary that attempts to sound critical but, in reality, reinforces the conclusions drawn by Bernhard Voelkl and colleagues. Indeed, the commentary ends with the conclusion that the reproducibility crisis is essentially “a failure to generalize beyond the original experimental conditions”. However, this is exactly the main point made by Voelkl et al.!

What, then, makes this commentary worth mentioning?

First, many published papers and analyses have already discussed various aspects of research data quality and the associated problems. Apparently, many of these efforts have failed to reach their goals, and the message has not been understood. As a result, we read in one of the responses to the DrugMonkey commentary (by qaz):
 “this whole “reproducibility crisis” is providing ammunition for the anti-science crowd to diminish the population’s trust in science”.
Certainly, we at PAASP and our colleagues have other intentions.

Would publishing more papers and analyses help? We dare say that this would not improve the situation much, because what is needed is a direct and open dialogue. At a recent conference on research data quality, one of us spoke with the organizer, who complained that it had proved impossible to find a speaker willing to represent the view that “all is good, continue as usual”. Why are such views best expressed in anonymous posts such as the ones on the DrugMonkey blog?

Second, we really like how one of the responses (again by qaz) described the current problem:
“The problem is that engineers (e.g. pharma) are trying to cross the river when someone threw one brick into the water. Wait until we’ve got a working bridge. Then we can cross the river.”

This is very wise advice! To make it work, we need to make sure that:
a) the bricks do not look like, and are not “sold” as, bridges (i.e. we need to understand that a manuscript published in a high-impact-factor journal usually represents highly advanced science and technology but may nevertheless be nothing more than an exploratory study),

b) negative results are made publicly known and available as soon as possible (otherwise, only positive results will be known and used to build a bridge before the negative results ever reach the public).

The latter is critical. Without that, DrugMonkey & Co will have every right to say: “Seriously, listen to what the scientists who are eager to be puppeted by Big Pharma have to say. Listen to their supposed examples that show “the problem is real”. Look at what makes them really mad.”

With that, we would like to remind our readers of the Publication Award for Negative Results in preclinical neuroscience, which is still accepting nominations!