Adequate sample size is key to reproducible research findings: low statistical power increases the probability that statistically significant results are false positives. To improve data robustness and reproducibility, journals have started to implement measures such as reporting checklists. In this paper, Carter et al. conducted a systematic review comparing articles submitted to Nature Neuroscience in the 3 months before (n=36) with articles published in the 3 months after (n=45) the introduction of the checklist. As an additional control, these articles were also compared with 123 publications from the Journal of Neuroscience over the same 3-month periods, which has not implemented a checklist to date.

The authors found that although the proportion of studies reporting sample sizes increased after checklist introduction (22% vs 53%), the proportion reporting formal power calculations decreased (14% vs 9%). Using sample size calculations for 80% power at a 5% significance level, they found little evidence that sample sizes were adequate to achieve this level of statistical power, even for large effect sizes. The authors conclude that reporting checklists alone may not improve the use and reporting of formal power calculations, and that journals need to consider strict enforcement of the checklists to make a meaningful difference.
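For readers less familiar with formal power calculations, the sketch below shows how a sample-size calculation for 80% power at a 5% significance level can be run in Python with statsmodels. It assumes a simple two-group comparison and Cohen's conventional effect sizes; the authors' own calculations may have used different designs and assumptions.

```python
# Minimal sketch (not the authors' exact analysis): sample size per group
# needed for 80% power at alpha = 0.05 in a two-sample t-test, evaluated at
# Cohen's conventional small, medium, and large effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                             ratio=1.0, alternative="two-sided")
    print(f"{label} effect (d = {d}): ~{n:.0f} subjects per group")
```

Under these assumptions, even a large effect (d = 0.8) requires roughly 26 subjects per group, and a medium effect (d = 0.5) roughly 64, which illustrates why small samples rarely reach 80% power.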