In registered reports (RRs), initial peer review and in-principle acceptance occur before the research outcomes are known. This combats publication bias and distinguishes planned from unplanned research. The rationale for how RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence; there could also be unintended costs, such as reduced novelty. In this article, 353 researchers peer reviewed a pair of papers drawn from 29 published RRs in psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria assessed, with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty and creativity to sizeable improvements in rigour of methodology, rigour of analysis and overall paper quality. This article therefore provides empirical evidence that RRs can improve research quality while reducing publication bias, ultimately improving the credibility of the published literature.