For several months, a Working Group of the Advisory Committee to the NIH Director (ACD) has been charged with assessing and making recommendations to enhance the reproducibility and rigor of animal research by improving experimental design, optimizing translational validity, enhancing training, and increasing the transparency of research studies involving animal models (LINK).
The recording of the most recent online meeting of the ACD can be found here: LINK (the WG presentation starts at 40:20).
As one outcome of the WG discussions, NIH has released a Request for Information (RFI) soliciting input from stakeholders across the scientific research community, advocacy groups, and professional societies, as well as from the general public. The NIH seeks comments on the following topics:
Rigor and Transparency
The challenges of rigor and transparency in animal research, and actions NIH can take to improve the quality, rigor, and transparency of animal research.
How preregistration, the process of specifying the research plan in advance of the study and submitting it to a registry, would impact animal research including improving the quality of scientific research.
While preregistration is often considered in the context of hypothesis-testing and confirmatory experiments, would it be useful at other stages of the research process, such as the exploratory and hypothesis-generating stages?
How to address the complexity and expense related to use of large animals, including nonhuman primates, that may provide biologically more relevant models.
How NIH can partner with the academic community, professional societies, and the private sector to enhance animal research quality through scientific rigor and transparency.
Optimizing the Relevance to Human Biology and Disease
Actions NIH can take to facilitate the translatability of animal research to human biology and disease.
How to encourage researchers to select or develop animal models with high utility and design experiments that have external validity to the clinical populations.
How NIH can partner with the academic community, professional societies, and the private sector to enhance animal research translatability.
How research culture drives the choice of animal models.
How incentives/disincentives in the research enterprise influence research using animals.
How all researchers, including trainees, are educated in rigorous research design, statistical considerations, transparent research practices, and the role of NIH in this training.
The deadline to submit comments is August 21, 2020.
The Global Preclinical Data Forum (GPDF), a partnership of Cohen Veterans Bioscience (CVB), a non-profit research biotech, and the European College of Neuropsychopharmacology (ECNP), is pleased to announce the opening of submissions for the 2020 Best Negative Data Prize competition. This prize, first launched in 2017, will be awarded to the researcher or research group whose publication in neuroscience research best exemplifies data where the results do not confirm the expected outcomes or original hypotheses. The call for submissions for the Best Negative Data Prize closes on May 31, 2020. The award itself is a monetary prize of €10,000 made available through the generous sponsorship support provided by CVB and will be awarded during the 2020 ECNP Congress in September 2020. Full details about how to apply for the Best Negative Data Prize can be found HERE.
Various modern technologies are known to pose risks to human health. In many cases, society learns about these risks only once negative consequences have become obvious, as in the cases of asbestos, DDT and, most recently, the broad-spectrum systemic herbicide glyphosate.
In the area of drug development, there is a risk mitigation strategy enforced by governmental agencies regulating market access for new drugs or new therapeutic uses of existing drugs. This strategy typically includes a thorough analysis of safety risks investigated in laboratory animals using a set of guidelines under conditions of Good Laboratory Practice (GLP).
In the area of environmental influences on human health, however, the current approach is different. For example, in the US, although the Federal Communications Commission (FCC) and FDA are jointly responsible for the regulation of wireless communication devices, the FDA does not require safety evaluations conducted under GLP or OECD standards. Current safety regulations are essentially engineering standards that do not take into account potential impacts on human health or physiology other than short-term heating risks (https://www.fcc.gov/general/radio-frequency-safety-0).
This is quite concerning, as there is a growing body of publications suggesting that exposure to radiofrequency radiation (RFR) at intensities too low to cause significant heating may nevertheless have harmful biological or health effects.
Thus, we set out to characterize the internal validity of the results of published studies using laboratory animals that evaluated the effects of whole-body RFR at the intensities and frequencies (2G-5G) relevant to the current and emerging use of cell phones.
In a scoping review, we screened the literature collated from a source that is commonly cited in the public discussion of the potential harms of RFR (https://www.powerwatch.org.uk) to support the development of a search strategy and the definition of extraction terms for a subsequent systematic review. Further, this exploratory analysis has allowed us to formulate a research hypothesis: the internal validity of preclinical studies on whole-body effects of RFR does not enable policy making.
Search and screening strategy
Studies of animal models were identified from the PowerWatch database of 525 peer-reviewed scientific papers (Section: Mobile and Cordless Phones) about electromagnetic fields (EMF). Articles included after the title screening underwent concurrent full-text screening for definitive inclusion.
Inclusion and exclusion criteria
Publications were included that described biological effects of whole-body RFR exposure in rodent model systems and were published in English-language, peer-reviewed journals. Excluded were studies describing the effects of RFR exposure on in vitro or ex vivo test systems. Reviews, conference presentations, slides, posters, and articles for which no full-text version could be obtained were also excluded from the analysis.
Study quality and risk of bias assessment
Publications identified were assessed against a four-point study quality checklist comprising the following internal validity criteria: (1) random allocation to group, (2) blinded assessment of outcome, (3) sample size calculation, and (4) inclusion/exclusion criteria. We recorded the number of checklist items met by each publication.
Further, we evaluated reporting of a) compliance with animal welfare legislation, b) adherence to the ARRIVE guidelines, c) use of AAALAC accredited facilities, d) application of OECD or GLP standards, e) study of genotoxicity outcomes, f) study of carcinogenesis outcomes, and g) measurement of temperature/heating effects.
Each reference was evaluated by two independent reviewers. Disagreements between reviewers were resolved by a third reviewer and a consensus was reached.
The interpretation of the results provided in the PowerWatch database – that studies were positive (any effects from the radiation exposure, whether harmful or not) or negative (no effects) – was also recorded.
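As a minimal sketch of the scoring and tallying described above (all data, field names, and labels here are hypothetical illustrations, not the actual review records), the checklist evaluation could be coded like this:

```python
from dataclasses import dataclass

# The four internal validity ("Landis") criteria recorded per publication.
CRITERIA = frozenset({
    "random_allocation",
    "blinded_assessment",
    "sample_size_calculation",
    "inclusion_exclusion_criteria",
})

@dataclass
class Study:
    label: str               # "positive" or "negative", per the PowerWatch interpretation
    criteria_met: frozenset  # subset of CRITERIA reported in the paper

def quality_score(study: Study) -> int:
    """Number of checklist items met (0-4), i.e. level L0..L4."""
    return len(study.criteria_met & CRITERIA)

def tally(studies):
    """Percent of studies at each score level, within each label category."""
    counts, totals = {}, {}
    for s in studies:
        key = (s.label, quality_score(s))
        counts[key] = counts.get(key, 0) + 1
        totals[s.label] = totals.get(s.label, 0) + 1
    return {k: 100 * n / totals[k[0]] for k, n in counts.items()}

# Hypothetical example records, not the actual review data:
studies = [
    Study("positive", frozenset({"random_allocation"})),
    Study("positive", frozenset()),
    Study("negative", frozenset({"random_allocation", "blinded_assessment"})),
]
print(tally(studies))
# → {('positive', 1): 50.0, ('positive', 0): 50.0, ('negative', 2): 100.0}
```

In practice each publication would carry two independent reviewers' scores, with disagreements resolved by a third reviewer before tallying, as described above.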
A total of 81 studies met the inclusion criteria. Of those, 60 studies reported effects (of any kind) of RFR according to the PowerWatch database; we refer to these as “positive”.
The figure below presents the numbers of “positive” and “negative” studies (% within the corresponding category) that met none, 1, 2, 3, or all four internal validity criteria (L0, L1, L2, L3, L4, respectively).
Given the exploratory nature of the analysis, we can only describe these results as suggestive of greater internal validity in “negative” studies.
Out of all 81 studies, only one study met all four Landis criteria.
Out of 60 “positive” studies, only 2 studies met three out of four Landis criteria.
As summarized in the table below, while randomization (typically with no method or details described) and blinding (typically limited to blinded outcome assessment) are mentioned in about 50% of reports, very few publications provided information on sample size calculation and/or inclusion/exclusion criteria.
| | Studies with negative results | Studies with positive results |
|---|---|---|
| Total number of publications | | |
| Reports followed ARRIVE guidelines | | |
| Conducted in AAALAC accredited facilities | | |
| Compliance with international or national animal welfare legislation stated | | |
| Any use of randomization mentioned | | |
| Any use of blinding mentioned | | |
| Any data inclusion or exclusion criteria mentioned | | |
| Sample size justified | | |
| Any OECD or GLP standards applied | | |
| Genotoxicity outcomes used | | |
| Carcinogenesis outcomes used | | |
| Heating / temperature effects measured | | |
Conclusions and Outlook
While we would like to emphasize that our pilot study applied only an exploratory, descriptive analysis, the results summarized above suggest that the internal validity of currently available studies on whole-body effects of radiofrequency radiation (technologies used in 2G/3G cell phones) is too low to enable policy making.
Together with our colleagues from QED Biomedical (https://www.qedbiomedical.com/), we have prepared a protocol for a systematic review that will be preregistered in a publicly accessible repository shortly (and this article will be updated with a link).
The first official meeting of the Preclinical Data Forum took place in Berlin in 2014, and one of the central seminars was given by Viswanath Devanarayan, head of Discovery Biostatistics at AbbVie at that time. The first thing Devan did was tell us the following story: A biologist and a statistician are on death row and are to be executed around the same time. They are each granted one last request. The statistician: “I want to give one last seminar.” The biologist: “I want to be executed first.” We had invited Devan to explain the famous Ioannidis 2005 paper “Why Most Published Research Findings Are False”. Devan did a great job. His explanations were at our non-statistician level: simple and yet professional, convincing, and leaving no doubt about the importance of the claims made in that paper.
In an ideal world, biomedical researchers would always go to professional statisticians for help. But we know that this is usually not possible for most preclinical scientists, who nevertheless need help and cannot get simple answers by attending yet another statistics course or reading yet another book or resource that is not actually written for non-statisticians. We would like to collect typical data analysis problems that scientists often face and for which solutions are not readily available. We will approach professional statisticians and, with their help, try to develop answers and examples that could help our peer scientists.
Like everyone else, we watch the current coronavirus situation with great concern as a global experiment begins: What are the right measures to fight the virus, and for how long should these measures continue if the pandemic churns across the globe unabated? How can policymakers tell if they are doing more good than harm? In this opinion piece and an interview, John P.A. Ioannidis highlights the need to generate robust, reliable data so that decision-making can be based on solid facts, profound knowledge, and a universal understanding of the spread and the tight control of this dangerous virus.
Due to the coronavirus, the anti-malaria drug chloroquine is receiving a lot of attention and excitement these days. Given the urgent need to find a treatment, it is very frustrating that here, again, we read worrisome reports about the quality of evidence related to this potential treatment for the coronavirus disease.
A blog post by Paul Glasziou from January 2013 starts with this headline, which is very much in line with our thinking.
Innovation is the cornerstone of progress in so many ways, but only innovations that withstand rigorous testing will lead to lasting progress. Hence, evaluation should be seen as a tool to reach faster and more durable progress rather than as an unnecessary burden. This is the goal of our network: evaluating biomedical innovations in mutual discourse and discussion on the way to real progress for patients.
To read Paul’s blog post, please follow THIS LINK.