Most discussions of reliability problems in science focus on systematic biases. In this article, Nate Breznau and colleagues broaden the lens to include the conscious and unconscious decisions that researchers make during data analysis and that may lead to diverging results. The authors coordinated 161 researchers in 73 teams and observed their analytical decisions as each team used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces public support for social policies.
Despite identical starting conditions, the teams reported widely diverging numerical findings and substantive conclusions. Researchers' expertise, prior beliefs, and expectations barely predicted this variation: more than 90% of the total variance in numerical results remained unexplained even after accounting for research decisions identified via qualitative coding of each team's workflow. This reveals a universe of uncertainty that stays hidden when a single study is considered in isolation. The idiosyncratic way in which results and conclusions varied across researchers offers a new explanation for why many scientific hypotheses remain contested. These results call for epistemic humility and clarity in reporting scientific findings.
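To make the "variance explained by coded decisions" claim concrete, here is a minimal sketch of the kind of decomposition involved. It does not use the study's data or its actual codebook; the decision variables (estimator family, dependent-variable choice, sample restriction) and all numbers are hypothetical, simulated so that coded decisions account for only a small share of between-team variance. Regressing team-level estimates on dummy-coded decisions and reading off the R-squared gives the explained share; the remainder is the idiosyncratic, unexplained variance the authors highlight.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical many-analysts setup: 73 teams each report one standardized
# effect estimate of immigration on support for social policy.
n_teams = 73

# Illustrative coded research decisions (not the study's actual codebook).
estimator = rng.integers(0, 3, n_teams)   # 3 estimator families
dv_choice = rng.integers(0, 2, n_teams)   # 2 outcome operationalizations
sample_cut = rng.integers(0, 2, n_teams)  # restricted vs. full sample

# Simulate effects where decisions move estimates only slightly and most
# variation is idiosyncratic noise, mirroring the pattern the paper reports.
effect = (0.02 * estimator - 0.03 * dv_choice + 0.01 * sample_cut
          + rng.normal(0.0, 0.15, n_teams))

# Dummy-code the decisions and regress team estimates on them;
# R^2 is the share of between-team variance those decisions account for.
X = np.column_stack([
    (estimator[:, None] == np.arange(1, 3)).astype(float),  # estimator dummies
    dv_choice,
    sample_cut,
])
fit = sm.OLS(effect, sm.add_constant(X)).fit()
print(f"Variance explained by coded decisions (R^2): {fit.rsquared:.2f}")
print(f"Unexplained share: {1 - fit.rsquared:.2f}")
```

Under these assumed parameters the unexplained share comes out large, which is the structure of the paper's finding: even exhaustive coding of observable analytical choices leaves most of the outcome variance unaccounted for.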