New Author Guidelines for Displaying Data and Reporting Data Analysis and Statistical Methods in Experimental Biology

To improve the robustness and transparency of scientific reporting, the American Society for Pharmacology and Experimental Therapeutics (ASPET), with input from PAASP’s Martin Michel, T.J. Murphy and Harvey Motulsky, has updated the Instructions to Authors (ItA) for ASPET’s primary research journals: Drug Metabolism and Disposition, Journal of Pharmacology and Experimental Therapeutics, and Molecular Pharmacology. The revised ItA went into effect on January 1, 2020. Details and the underlying rationale are described in an editorial/tutorial that appeared in all three journals.
Key recommendations include the need to differentiate between pre-planned, hypothesis-testing experiments on the one hand and exploratory experiments on the other; explanations of whether key elements of study design, such as sample size and the choice of specific statistical tests, had been specified before any data were obtained or were adapted thereafter; and explanations of whether any outliers (data points or entire experiments) were eliminated and when the rules for doing so had been defined.
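The guidelines themselves do not prescribe any particular rule; purely as a hypothetical illustration of what “pre-specified” can look like in practice, the short Python sketch below fixes an outlier criterion (values more than 3 standard deviations from the group mean) before any data are seen. The function name and the 3 SD threshold are our own assumptions, not part of the ASPET instructions.

```python
import numpy as np

# Hypothetical exclusion rule, written down in the study plan before any data
# are collected, so it cannot be tuned to the results afterwards.
OUTLIER_SD_THRESHOLD = 3.0  # assumed cut-off; the guidelines do not prescribe one


def apply_prespecified_outlier_rule(values, threshold=OUTLIER_SD_THRESHOLD):
    """Return (kept, excluded) values according to the rule fixed in advance."""
    values = np.asarray(values, dtype=float)
    z = np.abs(values - values.mean()) / values.std(ddof=1)
    return values[z <= threshold], values[z > threshold]


# Applied to made-up numbers; a manuscript would then report how many points
# were excluded and that the rule pre-dated data collection.
kept, excluded = apply_prespecified_outlier_rule([4.1, 3.9, 4.3, 4.0, 9.8])
print("kept:", kept, "excluded:", excluded)
```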
 
Importantly, Molecular Pharmacology has established a dedicated review process to check every submitted manuscript for compliance with the new guidelines. This is in contrast to JPET and DMD, which do not (yet) have similar policies in place.
 
It will be interesting to analyze, after a certain period of time, the impact of Mol Pharmacol’s additional review of manuscripts for guideline compliance.
Indeed, Anita Bandrowski and colleagues have recently shown that the identifiability of research tools such as antibodies has improved dramatically since 2015/2016 in journals like eLife and Cell, compared with, e.g., PLOS ONE. The reason identified was that both journals (eLife and Cell) not only changed their guidelines to make them more visible but also proactively enforced them. PLOS ONE also changed its ItA to improve how research tools are described, but without the same level of active enforcement.
 
We do hope that ASPET’s new Instructions to Authors will have a positive impact and will set an example for other journals and learned societies to follow. We also hope that ASPET’s efforts will further demonstrate the importance of actively enforcing such guidelines and instructions.

ARRIVE 2.0

A long-awaited revision of the ARRIVE guidelines has finally been published (The ARRIVE guidelines 2019). What impact can we expect from ARRIVE 2.0? At the very least, we hope that the Nature journals will update their life sciences checklist, which will certainly have an impact.

As an example, let’s look at a recently published paper that reported on the impact of the gut microbiome on motor function and survival in an ALS mouse model. We picked this example because: i) it is recent, and ii) it uses the same SOD1 mouse model that has become a classical example of how adherence to higher research rigor standards turns “positive” data into “negative”. We would not have picked this example if the only finding was the change in motor performance of SOD1 mice exposed to a long-term antibiotic cocktail – this is not too surprising, as antibiotics may penetrate into the CNS and may have effects unrelated to their “class” effects. The survival data in the germ-free animals also do not make us focus on this paper, because there were obvious problems with the rederivation itself (page 2, left column) and only insufficient information on the study design is given (e.g. no details on surgery; no information on whether the colonized animals were also obtained via rederivation). Most striking are actually the data in Figure 3, “Akkermansia muciniphila colonization ameliorates motor degeneration and increases life-span in SOD1-Tg mice”.

How would ARRIVE 2.0 help the reader to gain confidence in these data sets? In the table below, we review the responses provided by the authors in the Life Sciences checklist and, prompted by ARRIVE 2.0, indicate what information is missing to increase confidence in these study results:
Published Life Sciences checklist / What we would like to see

Checklist: “All in vivo experiments were randomized and controlled for gender, weight and cage effects.”
What we would like to see: What methods were used for randomization, and how can these methods explain the highly unequal sample sizes within an experiment?

Checklist: “Sample sizes were determined based on previous studies and by the gold standards accepted in the field.”
What we would like to see: A reference to the gold standards would be very helpful. The only gold standard in this field we are aware of – Scott et al. 2008 – would certainly not recommend using n = 5.

Checklist: “In all in vivo experiments the researchers were blinded.”
What we would like to see: This statement is insufficient to know whether blinding was maintained until data analysis.

Checklist: “No data were excluded from the manuscript.”
What we would like to see: The experiment in Fig. 3 was repeated 6 (!) times with sample sizes between 5 and 26, resulting in pooled sample sizes of up to 62 mice per group. However, survival data are presented only for 4-8 mice per group. It would be interesting to see survival data from the main pool of animals, unless the main experiment was stopped at day 140.

Checklist: “All attempts of replication were successful and individual repeats are presented in ED and SI sections.”
What we would like to see: Given that each of the “replication” experiments was severely underpowered (see the rough power calculation below the table), one may wonder whether these were indeed independent experiments or parts of a single study erroneously presented as “attempts of replication”.
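To make the “severely underpowered” point concrete, here is a rough, illustrative power calculation using the statsmodels package. The inputs are our own assumptions (a large standardized effect of d = 1.0, two-sided alpha of 0.05, and a simple two-group t-test as a stand-in); a survival endpoint would of course require survival-specific calculations such as those discussed by Scott et al. 2008.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs, for illustration only: large standardized effect (d = 1.0),
# two-sided alpha of 0.05, equal group sizes.
power_n5 = analysis.solve_power(effect_size=1.0, nobs1=5, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"Power with n = 5 per group: {power_n5:.2f}")   # roughly 0.3

# Sample size needed per group to reach 80% power under the same assumptions.
n_needed = analysis.solve_power(effect_size=1.0, power=0.8, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"n per group for 80% power: {n_needed:.1f}")    # roughly 17
```

Under these assumptions, n = 5 per group would detect even a large effect well under half of the time, which is why a series of such small experiments is so hard to interpret as independent replications.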
We realize that ARRIVE 2.0 may not be sufficient to obtain answers to all of the above questions, but it is certainly a major step forward that should be rigorously endorsed and promoted.

Updated ARRIVE guidelines 2019

A long-awaited revision of the ARRIVE guidelines has finally been published. There are three main reasons why the revision was badly needed and comes at the right time.
First, the list of recommendations has been re-worked to identify the Essential 10 – i.e. those recommendations that should receive the most attention. This is not to say that the abstract or the declaration of conflicts of interest are unimportant, but the enhanced focus on study design and analysis will make it easier for scientists to understand what the ARRIVE guidelines are for and why they cannot be ignored.
Second, for the original ARRIVE guidelines (as well as the related Nature life sciences checklist), it has never been entirely clear how scientists should respond and what information should be provided. Now, for the Essential 10, a companion manuscript provides detailed information with examples.
Third, the original ARRIVE guidelines were published almost 10 years ago. Various analyses have shown that the guidelines may be known but are certainly not followed. With ARRIVE 2.0, we, as a community, have an opportunity to re-start the awareness campaign and engage synergistic efforts (e.g. EQIPD) to make sure that ARRIVE 2.0 is also followed.

LINK

The ARRIVE guidelines published in 2010 (https://test2.paasp.net/wp-content/plugins/arrive-guidelines/) have now been updated.

Unambiguous identification of antibodies, cell lines and organisms around the globe: A call for the Research Resource ID

The reproducibility crisis affecting different research areas is widely discussed, and – without any doubt – the underlying causes are very complex. It is also clear that there is not just one solution; many different pieces must come together to tackle all the multifaceted issues and to create a change in the research landscape.
Correct and detailed reporting is one important aspect, which enables the understanding and reproduction of experiments. However, it is often not possible to unambiguously identify the tools and resources that were used in published experiments. Hence, Anita Bandrowski and Maryann Martone established the Research Resource Identifier (RRID) and challenged the current practice of reporting reagents and even organisms (Bandrowski & Martone, 2016).
Bandrowski and Martone are the founders of SciCrunch, a data-sharing platform offering many possibilities to archive and share data, which also hosts the RRID portal. The RRID for a specific reagent can be found on the SciCrunch webpage, which also offers the possibility to set up RRIDs for resources not yet listed or for newly developed research tools. Vendors such as BioLegend are also on board, adding their reagents to the database and displaying the RRID for each antibody on their websites.
All collected information is curated by SciCrunch employees before the RRID is assigned. Only such a rigorous process ensures correct information and tracking of research resources across different companies, especially when companies are acquired or sold and the catalogue number for the reagent changes.
The National Institutes of Health has issued a set of guidelines (e.g. NOT-OD-16-004) for areas where it believes experiments fail most frequently and has created new grant review criteria, overhauling how research is funded. That overhaul includes a document describing how labs will authenticate the resources they use, specifically antibodies, model organisms and cell lines. RRIDs are a way to take the first step in this authentication, which is to identify the correct reagents. In total, 571 journals (as of August 15th, 2018), including eLife, the Society for Neuroscience journals, and all of Cell Press (as part of the STAR Methods), encourage the use of the RRID. With the RRID, the “identifiability” of antibodies increases from 50% to well over 90% (Bandrowski et al, 2016); while this alone does not ensure reproducibility, it is a really good step in the right direction.
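As a small illustration of why machine-readable identifiers help, the sketch below scans a methods-section string for RRIDs. The regular expression and the example text are our own; the accession numbers shown are made-up placeholders, and AB_, CVCL_ and SCR_ are common RRID namespaces for antibodies, cell lines and software tools.

```python
import re

# RRIDs follow the pattern "RRID:<namespace>_<accession>". The namespaces below
# are used purely for illustration; the accession numbers are placeholders.
RRID_PATTERN = re.compile(r"RRID:\s?([A-Z]+_[A-Za-z0-9_-]+)")

methods_text = (
    "Sections were stained with an anti-GFAP antibody (RRID:AB_1234567), "
    "cells (RRID:CVCL_0000) were imaged, and images were analysed in an "
    "open-source package (RRID:SCR_0000000)."
)

# Extract every RRID mentioned in the text, e.g. for a compliance check.
for rrid in RRID_PATTERN.findall(methods_text):
    print("Found RRID:", rrid)
```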
PAASP members recognize from their own experience that persistent, unambiguous identification of biological and chemical reagents is an absolute necessity for producing reproducible research. PAASP therefore endorses the RRID and recommends its use during the PAASPort evaluation process. This globally used, unique identifier for research resources will lead to better transparency of research.

Improving animal research reporting standards

For decades, scientists and organizations have expressed concerns about the quality of animal research reporting. These concerns and efforts to establish better standards along with guidelines for researchers have gained more attention and importance lately given the ongoing discussion about a “reproducibility crisis” in biomedical research.
Given the variation in awareness and implementation of current reporting standards, the International Council for Laboratory Animal Science (ICLAS) decided to seek harmonization on animal research reporting guidelines to encourage improvements in the quality of science where laboratory animals are involved. ICLAS believes that improving research reporting will aid the dissemination of responsible research practices worldwide and reduce the impact of cultural factors influencing the ethical use of animals. In this EMBO Reports publication, the authors present simplified and general reporting principles that would make it easier for both journals and authors to report details of animal experiments. Adoption and implementation of these general principles could improve reproducibility of research results and animal welfare globally.
LINK