A consensus-based transparency checklist

Transparency of research processes is required not only for evaluating and reproducing results, but also for research synthesis and meta-analysis from the raw data and for effective replication and extension of that work.
To improve transparency, the authors provide a consensus-based transparency checklist for behavioural and social sciences, with a special focus on confirmatory work.
The checklist content was first developed by 45 journal editors as well as 18 open-science advocates and further fine-tuned using a preregistered Delphi approach.
Using an online template, it is possible to generate a report that can be submitted with a manuscript and/or posted to a public repository, thereby helping editors, reviewers and readers gain insight into the transparency of the submitted studies.
The checklist reinforces the norm of transparency by identifying concrete actions that researchers can take to enhance transparency at all the major stages of the research process.
This approach may also be followed in other fields of science: since the checklist is implemented online using R Shiny, one could easily develop a similar checklist based, for example, on the new ARRIVE 2.0 recommendations.
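The core of such a tool is simple: collect answers to checklist items and render them as a report that can accompany a manuscript. The following sketch illustrates that step in Python (rather than the R Shiny implementation the authors used); the item texts are illustrative placeholders, not the actual consensus checklist wording.

```python
# Sketch: turning checklist responses into a submittable plain-text report.
# The items below are illustrative placeholders, not the real checklist.

def render_report(title: str, answers: dict) -> str:
    """Render checklist answers as a plain-text transparency report."""
    lines = [f"Transparency report for: {title}", "=" * 40]
    for item, response in answers.items():
        lines.append(f"- {item}: {response}")
    return "\n".join(lines)

answers = {
    "Preregistration: was the study design preregistered?": "Yes",
    "Data availability: are raw data posted to a public repository?": "Yes",
    "Analysis code: is the analysis script publicly available?": "No",
}

print(render_report("Example confirmatory study", answers))
```

A checklist built on a template like this can be swapped out for any other set of items (e.g. ARRIVE 2.0), which is the portability the authors highlight.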



A long-awaited revision of the ARRIVE guidelines has finally been published (The ARRIVE guidelines 2019). What impact can we expect from ARRIVE 2.0? At the very least, we hope that the Nature journals will update their life sciences checklist, which would certainly have an impact.

As an example, let's look at a recently published paper that reported on the impact of the gut microbiome on motor function and survival in the ALS mouse model. We picked this example because: i) it is recent, and ii) it uses the same SOD1 mouse model that has become a classical example of how adherence to higher standards of research rigour turns "positive" data into "negative". We would not use this example if the only finding concerned changes in motor performance of SOD1 mice exposed to a long-term antibiotic cocktail – this would not be too surprising, as antibiotics may penetrate into the CNS and may have effects unrelated to their "class" effects. The survival data in the germ-free animals also do not make us focus on this paper, because there were obvious problems with the rederivation itself (page 2, left column) and only insufficient information on the study design is given (e.g. no details on surgery; no information on whether the colonized animals were also obtained via rederivation). Most striking are the data in Figure 3, "Akkermansia muciniphila colonization ameliorates motor degeneration and increases life-span in SOD1-Tg mice".

How would ARRIVE 2.0 help the reader gain confidence in these data sets? In the table below, we review the responses provided by the authors in the Life Sciences checklist and, stimulated by ARRIVE 2.0, indicate what information is missing to increase confidence in the study results:
Published Life Sciences checklist response, and what we would like to see:

Checklist: All in vivo experiments were randomized and controlled for gender, weight and cage effects.
What we would like to see: What methods were used for randomization, and how can these methods explain highly unequal sample sizes within an experiment?

Checklist: Sample sizes were determined based on previous studies and by the gold standards accepted in the field.
What we would like to see: A reference to the gold standards would be very helpful. The only gold standard in this field we are aware of – Scott et al. 2008 – would certainly not recommend using n = 5.

Checklist: In all in vivo experiments the researchers were blinded.
What we would like to see: This statement is insufficient to determine whether blinding was maintained until data analysis.

Checklist: No data were excluded from the manuscript.
What we would like to see: The experiment in Fig. 3 was repeated 6 (!) times with sample sizes between 5 and 26, resulting in pooled sample sizes of up to 62 mice per group. However, survival data are presented only for 4-8 mice per group. It would be interesting to see survival data from the main pool of animals, unless the main experiment was stopped at day 140.

Checklist: All attempts of replication were successful and individual repeats are presented in ED and SI sections.
What we would like to see: Given that each of the "replication" experiments was severely underpowered, one may wonder whether these were indeed independent experiments or parts of a single study erroneously presented as "attempts of replication".
We realize that ARRIVE 2.0 may not be sufficient to obtain answers to all of the above questions, but it is certainly a major step forward that should be rigorously endorsed and promoted.
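A back-of-the-envelope power calculation illustrates the sample-size concern above. This normal-approximation sketch is our own illustration (not part of ARRIVE 2.0 or the paper's analysis) and assumes a two-sided, two-group comparison with a large standardized effect size (d = 1):

```python
# Rough normal-approximation power for a two-sided, two-group comparison:
# why n = 5 per group is underpowered even for a large effect (d = 1).
from statistics import NormalDist

def approx_power(d: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power with standardized effect size d and n per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. ~1.96
    return NormalDist().cdf(d * (n / 2) ** 0.5 - z_crit)

for n in (5, 26, 62):
    print(n, round(approx_power(1.0, n), 2))  # n=5 gives ~0.35, n=26 gives ~0.95
```

Even under this generous assumption, n = 5 yields roughly one-in-three power, which is why pooling such "repeats" and calling them replications is questionable.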

Toward Good In Vitro Reporting Standards

Many areas of biomedical science have developed reporting standards and checklists to support the adequate reporting of scientific efforts, but in vitro research still has no generally accepted criteria. In this article, the authors discuss 'focus points' of in vitro research, ensuring that the scientific community can take full advantage of animal-free methods and studies and that resources spent on conducting these experiments are not wasted. A first priority of reporting standards is to ensure the completeness and transparency of the provided information (data focus). A second tier is the quality of data display, which makes information digestible and easy to grasp, compare and analyse further (information focus).
This article summarizes a series of initiatives geared towards improving the quality of in vitro work and its reporting – with the ultimate aim of generating Good In Vitro Reporting Standards (GIVReSt).


New Handbook of Experimental Pharmacology focusing on Good Research Practice

The Handbook of Experimental Pharmacology is one of the most authoritative and influential book series in pharmacology. It provides critical and comprehensive discussions of the most significant areas of pharmacological research.
PAASP members have contributed to the new Handbook of Experimental Pharmacology volume "Good Research Practice in Pharmacology and Experimental Life Sciences" as Editors and/or authors of individual book chapters.

Good Practice for Conference Abstracts and Presentations: GPCAP

GPCAP provides recommendations on good submission and presentation practice for scientific and medical congresses. These recommendations cover conference abstracts, posters and slides for oral presentations. GPCAP focuses on company-sponsored research, i.e. research that is sponsored and/or funded by a pharmaceutical, medical device or biotechnology company. Company-sponsored research refers to all types of research, including preclinical and clinical, and pre- and post-marketing.
Abstracts submitted to conferences, as well as presentations (oral or poster), are not peer-reviewed. It is therefore important that they are prepared with a rigour similar to that of a full publication. The GPCAP recommendations extend and complement the principles of the Good Publication Practice (GPP) and provide "general principles of best practice for conference presentations and provide recommendations around authorship, contributorship, financial transparency, prior publication and copyright, to conference organizers, authors and industry professionals."