Preclinical in vivo scientists often face a familiar situation: imagine we design a study to demonstrate that “our” drug exerts a certain pharmacodynamic activity. As unfortunately happens rather often, the study includes only one dose of our drug (e.g. because we lack the resources to study more doses, or are not aware of the importance of addressing the dose-dependence of pharmacological action).
However, our study does include a so-called positive control: typically a drug that has previously been tested under similar conditions and found to exert the desired effects. Such positive controls can be well established (i.e. efficacy demonstrated on multiple occasions with sufficient rigor) or less well established. In the latter case, the obvious question is: why include a “positive control” that is not well established in the first place?
Let’s go back to our study – we have two drugs (the test compound and a positive control) and a negative control (i.e. the vehicle used to dissolve them). Once the study is completed and the raw data are processed, we assess whether the recorded parameters were affected by exposure to our drug and to the positive control.
There are, in principle, four possible outcome scenarios, as summarized in the table below:

| Our drug | Positive control | Interpretation |
|---|---|---|
| Worked | Worked | Experiment successful, as we had hoped |
| Failed | Worked | Goal not reached |
| Failed | Failed | Test system blamed; trust in our drug maintained |
| Worked | Failed | ? (discussed below) |
If both our drug and the positive control worked, we conclude that the experiment was successful – as we had hoped. If both failed, we blame the test system (the positive control did not work) but maintain trust in our drug. If our drug failed but the positive control worked, we must concede that our goal was not reached.
But what if our drug worked and it is the positive control that failed? In this case, we hear voices: “Isn’t that great – our drug is better than the positive control! … That positive control was not good anyway… In the end, we are interested in OUR drug’s efficacy, aren’t we?” It may be very tempting to interpret the outcome in favor of our drug!
Why did we say at the beginning that this situation may be especially familiar to in vivo scientists? Because the voices urging us to disregard the failure of a positive control grow louder when such studies are not isolated experiments but the final steps of a larger project in which many colleagues developed the hypothesis and generated supporting evidence using other methods. In vivo studies sit, for obvious reasons, at the end of this evidence-generating chain (see LINK for more examples and a discussion of the associated pressure).
Yet the solution to this problem of using unestablished positive controls is quite simple: when planning a (confirmatory) study, hypotheses must be pre-specified and the possible outcomes of the experiment explicitly defined, including the interpretation of any positive controls. If one does not pre-specify how the study outcomes will be interpreted and used for decision-making, studies can be designed to bias the interpretation in a favored direction, and, as illustrated above, it becomes very tempting to take advantage of that bias.
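To make this concrete, a pre-specified decision rule can be written down as explicitly as a piece of code. The sketch below is purely illustrative (the function name and interpretation wording are our own, not from any registry or guideline); it maps each of the four outcome combinations from the table above to a pre-agreed interpretation, fixed before the data are seen:

```python
# Hypothetical sketch of a pre-specified decision rule: every possible
# combination of study outcomes is mapped to an interpretation *before*
# the experiment is run. Labels and wording are illustrative only.

def interpret_outcome(drug_worked: bool, control_worked: bool) -> str:
    if drug_worked and control_worked:
        return "Success: pharmacodynamic activity confirmed in a valid assay."
    if not drug_worked and control_worked:
        return "Failure: assay was valid; the test compound showed no effect."
    if not drug_worked and not control_worked:
        return "Invalid experiment: assay sensitivity not demonstrated; repeat."
    # drug_worked and not control_worked
    return ("Inconclusive: assay sensitivity not demonstrated, so the apparent "
            "effect of the test compound cannot be interpreted; repeat.")
```

Crucially, the last branch is fixed as “inconclusive” in advance, so the tempting reading (“our drug is better than the positive control!”) is ruled out before the first animal is dosed.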