by Plinio Cabrera Casarotto, PhD,
Editor-in-Chief, JRepNeurosci
The replicability of results is a cornerstone of science. We can only rely on the outcomes of our scientific discoveries because they can be repeated, no matter where or by whom. However, irreproducible results are not a rare phenomenon, especially (but not exclusively) in academia, and we only began to acknowledge the problem when recent reproducibility issues in the fields of Psychology and Cancer Biology gained attention in the media. But why is this situation so common? Our current reward system is set up so that ‘novelty pays’. We scientists are rewarded in the form of grants, fame and respect if we publish high-impact discoveries in high-impact journals. Institutions, in order to keep their reputation and the money flowing in, want those scientists publishing well; and journals, to keep their impact factors and their flow of articles, subscriptions and fees, demand ever more novelty. This feeds a cycle of fast-paced, poorly curated results that, unsurprisingly, translates into poor replicability, and therefore poor reliability, of scientific discoveries. We may not yet feel the impact, but if left unchecked this cycle may erode the trust in science. After all, what is the point of spending time on something that should be reliable but fails precisely on that point?

Fortunately, another feature of science is that it self-corrects, or better: it allows us to correct previous imprecisions, misconceptions and mistakes. In this sense, we can use the same tools to address potential mistakes in our discoveries, but communicating and discussing them is a whole different ball game.
As we saw, journals are an integral part of the cycle that perpetuates the fast-paced communication of poorly curated results, so breaking that cycle will not be in the immediate interest of the companies profiting from it. In this scenario, we need initiatives that give scientists interested in testing the reliability of our discoveries a venue to discuss their results. This idea led us to set up the Journal for Reproducibility in Neuroscience – JRepNeurosci (ISSN: 2670-3815). In this journal we peer-review and publish well-conducted attempts to replicate whole studies or single experiments across diverse fields of Neuroscience, from biochemistry to fMRI and behavioral studies. The journal is operated by a board of researchers at different stages of their careers, from senior scientists to PhD students, who keep their regular duties in the lab or office but recognize the need for such an initiative, and it is hosted and maintained by the Helsinki University Library. As a journal, we want to detach ourselves from any commercial interests: we do not charge any fees from authors or readers; we focus solely on the proper execution and analysis of the submitted material, not on its novelty or potential impact; we require openly accessible data respecting the FAIR principles (Findable, Accessible, Interoperable, Reusable); the authors retain the copyright of their studies; and, for articles, the reviewers' comments are published alongside the accepted manuscripts. In short, we propose a slow-paced, well-curated approach to scientific publishing.
We published our first volume in August 2020, comprising two commentaries, one mini-review and two articles, along with an editorial presenting the journal. In January 2021, we opened the second volume with three commentaries on current reproducibility issues and how to tackle them from different perspectives, for example how librarians could help in the process. The journal was very well received on Twitter, with more than 7,000 interactions in the first week, and the material in the journal has been accessed more than 2,500 times.
JRepNeurosci is a perfect venue for Master's and PhD students to publish their attempts to reproduce published experiments in their field, as well as for companies to publish the results of positive controls within their drug development pipelines. Our long-term goal is to become a platform where scientists can find information on the replicability of the models and tests they want to implement, and at the same time to reward the scientists who contribute to the reproducibility and reliability of science with the articles they need to advance their careers.