Research rigor in preclinical studies on biological effects of whole-body exposure to electromagnetic fields (2G/3G cell phone technology)

Various modern technologies are known to pose risks to human health. In many cases, society learns about these risks only once negative consequences have become obvious, as in the cases of asbestos, DDT and, most recently, the broad-spectrum systemic herbicide glyphosate.

In the area of drug development, there is a risk mitigation strategy enforced by governmental agencies regulating market access for new drugs or new therapeutic uses of existing drugs. This strategy typically includes a thorough analysis of safety risks investigated in laboratory animals using a set of guidelines under conditions of Good Laboratory Practice (GLP).

In the area of environmental influences on human health, however, the current approach is different. For example, in the US, although the Federal Communications Commission (FCC) and the FDA are jointly responsible for the regulation of wireless communication devices, the FDA does not require safety evaluations based on GLP or OECD standards. Current safety regulations are essentially engineering standards that do not take into account potential impacts on human health or physiology other than short-term heating risks (https://www.fcc.gov/general/radio-frequency-safety-0).

This is concerning, as a growing body of publications suggests that radiofrequency radiation (RFR) may have harmful biological or health effects at intensities too low to cause significant heating.

Thus, we set out to characterize the internal validity of published laboratory animal studies that evaluated the effects of whole-body RFR exposure at the intensities and frequencies (2G-5G) relevant to the current and emerging use of cell phones.

In a scoping review, we screened the literature collated from a source that is commonly cited in the public discussion of the potential harms of RFR (https://www.powerwatch.org.uk) to support the development of a search strategy and the definition of extraction terms for a subsequent systematic review. This exploratory analysis also allowed us to formulate a research hypothesis: the internal validity of preclinical studies on whole-body effects of RFR is insufficient to support policy making.

Search and screening strategy

Studies of animal models were identified from the PowerWatch database of 525 peer-reviewed scientific papers (Section: Mobile and Cordless Phones) about electromagnetic fields (EMF). Articles included after the title screening underwent concurrent full-text screening for definitive inclusion.

Inclusion and exclusion criteria

Publications were included if they described biological effects of whole-body RFR exposure in rodent model systems and were published in English-language, peer-reviewed journals. Excluded were studies describing the effects of RFR exposure on in vitro or ex vivo test systems. Reviews, conference presentations, slides, posters and articles for which no full-text version could be obtained were also excluded from the analysis.

Study quality and risk of bias assessment

Publications identified were assessed against a four-point study quality checklist comprising the internal validity criteria proposed by Landis et al. (Nature, 2012): (1) random allocation to group, (2) blinded assessment of outcome, (3) sample size calculation, and (4) inclusion/exclusion criteria. We recorded the number of checklist items scored.
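As a minimal sketch of this scoring scheme (the criterion keys below are hypothetical names chosen for illustration; the actual extraction form is not reproduced here), each publication can be scored by counting the checklist items it reports:

```python
# Illustrative sketch of the four-item internal-validity checklist scoring.
# Criterion keys are hypothetical names chosen for this example; each study
# record marks whether the publication reported that item.

CRITERIA = [
    "random_allocation",        # (1) random allocation to group
    "blinded_assessment",       # (2) blinded assessment of outcome
    "sample_size_calculation",  # (3) sample size calculation
    "inclusion_exclusion",      # (4) inclusion/exclusion criteria
]

def checklist_score(study: dict) -> int:
    """Return the number of checklist items reported (0-4)."""
    return sum(bool(study.get(c)) for c in CRITERIA)

# A study reporting randomization and blinding, but not the other two items:
example = {"random_allocation": True, "blinded_assessment": True}
print(checklist_score(example))  # prints 2
```

Studies meeting none, one, two, three, or all four items would then fall into the L0-L4 categories used in the results below.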

Further, we evaluated reporting of a) compliance with animal welfare legislation, b) adherence to the ARRIVE guidelines, c) use of AAALAC accredited facilities, d) application of OECD or GLP standards, e) study of genotoxicity outcomes, f) study of carcinogenesis outcomes, and g) measurement of temperature/heating effects. 

Each reference was evaluated by two independent reviewers. Disagreements between reviewers were resolved by a third reviewer until consensus was reached.

The interpretation of the results provided in the PowerWatch database – that studies were positive (any effects from the radiation exposure, whether harmful or not) or negative (no effects) – was also recorded.

Results

A total of 81 studies met the inclusion criteria. Of those, 60 studies reported effects (any) of RFR according to the PowerWatch database; we refer to them as “positive”.

The figure below presents the numbers of “positive” and “negative” studies (% within the corresponding category) that met none, 1, 2, 3, or all four internal validity criteria (L0, L1, L2, L3, L4, respectively).

Given the exploratory nature of the analysis, we can only describe these results as suggestive of greater internal validity in “negative” studies.

Out of all 81 studies, only one met all four Landis criteria.

Out of 60 “positive” studies, only two met three of the four Landis criteria.

As summarized in the table below, while randomization (typically no method or details described) and blinding (typically limited to blinded outcome assessment) are mentioned in about 50% of reports, very few publications provided information on sample size calculation and/or inclusion/exclusion criteria.

|  | All studies | Studies with negative results | Studies with positive results |
| --- | --- | --- | --- |
| Total number of publications | 81 | 21 | 60 |
| Reports followed ARRIVE guidelines (%) | 0.0 | 0.0 | 0.0 |
| Conducted in AAALAC accredited facilities (%) | 2.5 | 0.0 | 3.3 |
| Compliance with international or national animal welfare legislation stated (%) | 34.6 | 42.9 | 31.7 |
| Any use of randomization mentioned (%) | 55.6 | 66.7 | 51.7 |
| Any use of blinding mentioned (%) | 46.9 | 57.1 | 43.3 |
| Any data inclusion or exclusion criteria mentioned (%) | 11.1 | 14.3 | 10.0 |
| Sample size justified (%) | 7.4 | 28.6 | 0.0 |
| Any OECD or GLP standards applied (%) | 1.2 | 4.8 | 0.0 |
| Genotoxicity outcomes used (%) | 8.6 | 0.0 | 11.7 |
| Carcinogenesis outcomes used (%) | 7.4 | 23.8 | 1.7 |
| Heating / temperature effects measured (%) | 19.8 | 33.3 | 15.0 |
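The percentages reported per category follow directly from item counts and group sizes. As a brief sketch (the study counts below are reconstructed from the reported percentages for the randomization item and are illustrative only, not taken from the underlying dataset):

```python
# How the "% within corresponding category" values are derived.
# Counts are reconstructed from the reported percentages for
# "Any use of randomization mentioned" and the group sizes;
# they are illustrative, not taken from the underlying dataset.

group_sizes = {"all": 81, "negative": 21, "positive": 60}
randomization_counts = {"all": 45, "negative": 14, "positive": 31}

percentages = {
    group: round(100 * randomization_counts[group] / n, 1)
    for group, n in group_sizes.items()
}
print(percentages)  # {'all': 55.6, 'negative': 66.7, 'positive': 51.7}
```

Note that the "negative" and "positive" counts sum to the "all" count (14 + 31 = 45), which serves as a simple consistency check on the reconstruction.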

Conclusions and Outlook

While we emphasize that our pilot study applied only an exploratory, descriptive analysis, the results summarized above suggest that the internal validity of currently available studies on whole-body effects of radiofrequency radiation (technologies used in 2G/3G cell phones) is too low to support policy making.

Together with our colleagues from QED Biomedical (https://www.qedbiomedical.com/), we have prepared a protocol for a systematic review that will be preregistered in a publicly accessible repository shortly (and this article will be updated with a link).

New Author Guidelines for Displaying Data and Reporting Data Analysis and Statistical Methods in Experimental Biology

To improve the robustness and transparency of scientific reporting, the American Society for Pharmacology and Experimental Therapeutics (ASPET), with input from PAASP’s Martin Michel, T.J. Murphy and Harvey Motulsky, has updated the Instructions to Authors (ItA) for ASPET’s primary research journals: Drug Metabolism and Disposition, Journal of Pharmacology and Experimental Therapeutics, and Molecular Pharmacology. The revised ItA went into effect on January 1, 2020. Details and the underlying rationale are described in an editorial/tutorial that appeared in all three journals.
Key recommendations include the need to differentiate between pre-planned, hypothesis-testing experiments on the one hand and exploratory experiments on the other; explanations of whether key elements of study design, such as sample size and choice of specific statistical tests, had been specified before any data were obtained or were adapted thereafter; and explanations of whether any outliers (data points or entire experiments) were eliminated and when the rules for doing so had been defined.
 
Importantly, Molecular Pharmacology has established a dedicated review process for each manuscript received to check compliance with the new guidelines. This is in contrast to JPET and DMD, which do not have similar policies in place (yet).
 
It will be interesting to analyze the impact of Molecular Pharmacology’s additional guideline-compliance review after it has been in place for some time.
Indeed, Anita Bandrowski and colleagues have recently shown that identifiability of research tools like antibodies was dramatically improved in journals like eLife and Cell since 2015/2016 compared to e.g. PLOS ONE. The reason identified was that both journals (eLife and Cell) not only changed their guidelines to make them more visible but also proactively enforced them. PLOS ONE also changed their ItA to improve how they describe research tools, but without the same level of active enforcement.
 
We hope that ASPET’s new Instructions to Authors will have a positive impact and will set an example for other journals and learned societies to follow. We also hope that ASPET’s efforts will further demonstrate the importance of actively enforcing guidelines and instructions.

PAASP US, LLC receives funding from NIH

The PAASP Network is very glad to announce that the NIH has awarded an SBIR grant to a Network member – PAASP US, LLC.

Robust nonclinical data are an absolute requirement for building an effective translational strategy and developing clinically safe and effective medications. Most current efforts to facilitate the generation of robust nonclinical research data focus on producing guidelines and checklists pertaining to study design and data analysis. However, they do so mainly from a reporting perspective, rarely consider processes other than study design and data analysis (such as compliance of data records with FAIR principles), and are at risk of triggering normative responses, whereby research teams simply satisfy the guidelines at a point when it is too late to take corrective actions.

The PAASP team will work on a novel research evaluation tool (PAASPort®) for early identification of potential risks of bias related to nonclinical research practice. Created by industry and academic scientists working with business professionals, PAASPort® will allow funding organizations such as private investors, non-profit foundations, corporate and non-corporate VCs to inform both financial (investment) and portfolio (science) decisions by supporting research with low risks of bias, thereby improving the probability that basic nonclinical discoveries will translate into safe and effective clinical treatments.

The PAASPort® tool will consist of a web application with proprietary analytics supporting project-specific and research unit-specific certification.  During Phase 1 of the project, the team will convert the existing paper-and-pencil PAASPort® prototype into a web-based, interactive digital tool.  As the next step, Phase 2 of the project will focus on developing and implementing analytical mechanisms to support semi-automated processing of information collected from the online assessment and demonstrating economic benefits of improved nonclinical research practice quality assessments for private investors.

The project will be led by Dr. Daniel Deaver (CSO, PAASP US) and Dr. Andre Der-Avakian (COO, PAASP US), supported by a highly motivated team of PAASP US employees, advisors and consultants across multiple locations in the US and Europe.

About PAASP Network

PAASP (Partnership for Assessment and Accreditation of Scientific Practice; www.paasp.net) is an unincorporated association of legally and operationally independent entities (one of which is PAASP US, LLC) acting in different countries but sharing the core infrastructure and united in the global goal to promote research quality standards.

Each Network member is independently working with local customers and partners in its geography, seeks to obtain funding and generate revenue for the operation of their own member organization, and is entering collaborations and developing novel products.

As of January 2020, the Network includes members and member organizations in the US, Germany, Benelux, Russia, Baltic States, and France.

About SBIR

The Small Business Innovation Research (SBIR; www.sbir.gov) program is a highly competitive program that encourages US domestic small businesses to engage in Federal Research/Research and Development (R/R&D) that has the potential for commercialization. Through a competitive awards-based program, SBIR enables small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s R&D arena, high-tech innovation is stimulated and the US gains entrepreneurial spirit as it meets its specific research and development needs.


Most innovations are not advances: innovation + evaluation = progress

A blog post by Paul Glasziou from January 2013 starts with this headline, which is very much in line with our thinking. 

Innovation is the cornerstone of progress in so many ways, but only innovation that withstands rigorous testing will lead to lasting progress. Hence, evaluation should be seen as a tool to reach faster and more durable progress rather than as an unnecessary burden. This is the goal of our network: evaluating biomedical innovations in mutual discourse and discussion on the way to progress for patients.

To read Paul’s blog post, please follow THIS LINK.

PAASP at the REWARD/EQUATOR Conference in Berlin (February 20-22, 2020)

We will present our activities to improve Good Research Practice and Data Quality and are very much looking forward to meeting you in Berlin.
The topic of the conference is: Sharing Strategies for Research Improvement – Challenges and opportunities for Improvement for Ethics Committees and Regulators, Publishers, Institutions and Researchers, Funders – and Methods for measuring and testing Interventions.
The conference is hosted by the BIH QUEST Center and a preliminary program is available HERE.

Be positive about negatives – recommendations for the publication of negative (or null) results.

PAASP’s Anton Bespalov, together with Phil Skolnick (Opiant) and Thomas Steckler (Janssen), addresses the issue of negative data rarely being disclosed in the form of scientific publications. There seems to be a general reluctance to publish negative results, due to a range of factors (e.g., the preference for positive findings, which are more likely to generate citations and funding for additional research).
In this article, the authors describe a set of criteria that can help scientists, reviewers and editors to publish technically sound, scientifically high-impact negative (or null) results originating from rigorously designed and executed studies. Proposed criteria emphasize the importance of collaborative efforts and communication among scientists (also including the authors of original publications with positive results).