When it comes to novel therapies, positive data justifiably carries more value than negative or null data.  Indeed, positive data promises therapeutic benefit to patients in need and financial profit to those who advance such a new therapy.

Despite long drug development timelines, positive data from even early-stage clinical trials often has an immediate impact, reflected in increased shareholder value.

Sage Therapeutics recently presented early data from a phase 2 trial in 26 patients with mild cognitive impairment or mild Alzheimer’s disease dementia, suggesting that its drug SAGE-718, after 14 days of once-daily dosing, improved several aspects of executive performance, learning and memory (improvement relative to baseline, as there was no placebo control arm; REF).

Alzheimer’s disease remains an area of major unmet medical need and intense drug development effort.  Thus, any positive signal is eagerly welcomed by patients, caregivers and the research community.

The Sage study, however, had a big catch: it was unblinded, had a small sample size and lacked a placebo control arm.  It was intentionally run this way, even though everyone in the field, including the Sage team, knows that robust study design is essential for generating meaningful data.  So why, then, design and run the study in such a way?

This is how Sage’s chief development officer Jim Doherty explains it: “We call it serial de-risking. We learn a certain amount, and if what we see justifies the next investment, then we move to the next study and the next study” (REF).

Given the many dozens of failures in this field, the prior odds of seeing a true, therapeutically relevant effect for any novel therapy are very low.  As has been repeatedly discussed in various contexts (REF), even a properly designed, double-blind, early-stage proof-of-concept RCT may not increase the post-study odds by much.
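To make the arithmetic behind this concrete, here is a minimal sketch of the underlying Bayesian reasoning, treating a study as a diagnostic test whose positive likelihood ratio is power divided by the false-positive rate.  All numbers below (the 5% prior, the power, the inflated false-positive rate for a biased study) are illustrative assumptions, not figures from the Sage program or any real trial.

```python
# Illustrative Bayesian arithmetic: posterior odds = prior odds x likelihood ratio.
# All numbers are assumptions chosen for illustration, not data from any real trial.

def posterior_probability(prior_prob: float, power: float, false_pos_rate: float) -> float:
    """Probability that the effect is real after a 'positive' result,
    treating the study like a diagnostic test (sensitivity = power,
    false-positive rate = chance of a positive readout when the drug does nothing)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    likelihood_ratio = power / false_pos_rate   # how much a positive result shifts the odds
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Assume only ~5% of novel Alzheimer's drug candidates truly work (low prior odds).
prior = 0.05

# A rigorous double-blind RCT (power 0.8, alpha 0.05): likelihood ratio = 16.
print(posterior_probability(prior, power=0.8, false_pos_rate=0.05))  # ~0.46

# A small, open-label, uncontrolled study where bias makes a "positive" readout
# likely even for an inert drug (say 50%): likelihood ratio ~1.6.
print(posterior_probability(prior, power=0.8, false_pos_rate=0.5))   # ~0.08
```

Under these assumed numbers, even a positive result from a rigorous trial leaves the drug more likely to fail than to work, and a biased, uncontrolled study barely moves the odds at all.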

And one can state with confidence that a study conducted without due rigor and without adequate controls for risks of bias is highly unlikely to increase confidence in the hypothesis.

So, back to our original question: what, then, is the reason?  Was it about generating positive preliminary data to attract further investment and thereby fund a more robust and definitive study?  Is it similar to generating “preliminary” data to support preclinical research grant applications?

In preclinical research, we can often learn from our big brother, clinical research: how to minimize the risks of bias, how to design high-quality studies, and so on.

But should we also learn the motivations for not following best practices in study design?  Indeed, if this “serial de-risking” strategy is acceptable in clinical research, why would it not be acceptable in preclinical research?  Or should the ethical barriers for preclinical research involving animals actually be higher than for research in humans?