We strongly believe that, in most cases, preclinical in vivo studies conducted without adequate protection against risks of bias (i.e., underpowered, lacking randomization, blinding, and pre-specified hypotheses and analysis plans) are likely to generate results of little or no value, irrespective of the financial, time, and labor costs invested.

Yet we do agree that there are situations when the judgement is not black-or-white and a creative approach must be identified and implemented to secure high rigor in the absence of certain conventional protective measures.

Our attention was caught by the following discussion by Everitt and colleagues (REF1):

“There are few topics in pathology as controversial as the issue of “blind reading of slides.” This is when the pathologist has no knowledge of the treatment group status of an individual animal during the histologic evaluation of a tissue section. The position of most pathologists, regulators, and pathology professional societies is that the pathologist should initially evaluate an animal with full knowledge of control and treatment group allocation (although information on the actual treatment can be withheld) and full individual animal data.”

Many histopathological assessments are conducted in the regulated research area, where nonclinical studies support important risk assessments or business decisions. In such cases, pathology peer review is recommended (REF2).

Given that histopathologic evaluation, especially in the context of regulated research, is a highly specialized endeavor, we trust that the experts in this field have reasons for subjecting only subsets of data to peer review (REF2, REF3) and for not applying blinding, although masking slides after the initial unblinded assessment to compare specific findings was found to minimize bias in histopathology studies (REF1).

Leaving histopathology practices aside, there are several lessons from the discussion on blind reading of histopathology slides that can benefit the design and conduct of nonregulated research, our primary area of focus:

Peer review is essentially a way to perform a confirmatory analysis without conducting a separate confirmatory study. In other words, it allows both exploratory and confirmatory analyses to be run on the same set of raw data. In certain fields of nonregulated research, a similar approach can be applied if three conditions are met:

  • Studies are fully reconstructable (i.e., all information about who conducted the study, when, and how is available, with every step in data collection and processing properly documented);
  • Raw data are properly recorded and stored (i.e., following common standards such as ALCOA; REF4); and
  • Raw data sets, along with the data analysis plan (and tools, if necessary!), are made available to scientists maximally independent of the originator team.

The latter notion of independent “peer reviewers” is particularly important. We often hear that these days even large biopharma companies prefer to outsource most preclinical safety GLP studies. It is then argued that CROs are not biased by the project hypotheses or by the pressure to advance the drug product along the development funnel. Indeed, assuming adherence to all applicable business and regulated research standards (all verifiable!), the outcome of a toxicological study has no impact on whether a CRO will be contracted again by the same sponsor.

In the non-regulated research environment, such independence is more difficult to achieve. For example, even if the best business and research standards have been maintained, a negative outcome of a study may have an undesired impact (e.g., if a positive control fails, a model or research method will be deemed insufficiently reliable and the performing team may get into trouble). Thus, the role of “independent peer reviewers” should be assigned to reviewers, editors, and ultimately readers who are given full access to all data and analyses supporting the conclusions presented in the research report.

In sum, there are cases when blinding is not applied but the resulting risk of bias can be effectively mitigated by high-quality management of data (including all metadata) and by data sharing (as open as possible).