Over the past several years, more and more journals have revised their guides for authors to include specific instructions on the information to be provided in manuscripts – from animal welfare statements to various aspects of study design and data analysis.
A paper by Horowitz et al. is a good illustration of the changes triggered by these new journal policies, as well as of the challenges that come with them.
On the positive side, it is very good to read that a power analysis was used to determine the sample size, that all data generated or analyzed in the study were included in the paper, that randomization was applied, and that blinding was maintained throughout the histological, biochemical and behavioral assessments, with treatment groups un-blinded only at the end of each experiment for statistical analysis.
Yet, we have previously expressed concerns (HERE and HERE) that, unless specific actions are taken and authors are appropriately trained and informed, changes in journal policies may not always achieve their objectives. More specifically, we are worried about normative responses, especially on subjects that are not sufficiently clarified by the journals’ guides.
For example, when reading this particular paper, we could not understand why, in an experiment involving one control group and one treatment group, the sample sizes are markedly unequal – 12 vs 19. We turned to the methods description hoping to understand how the randomization was conducted, but found no details. We also looked for more information on how the sample size was determined, but found hardly anything useful either (apart from the generic alpha = 0.05 and power = 0.8 levels). We have also contacted the authors.
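For readers who want to see what such a calculation typically looks like, below is a minimal sketch in Python using statsmodels. Only the alpha = 0.05 and power = 0.8 values are taken from the paper’s generic statement; the effect size (Cohen’s d = 1.0) is a hypothetical placeholder, since the paper does not report the assumptions behind its power analysis.

```python
# Minimal sketch of a two-sample power analysis for an animal experiment.
# Only alpha = 0.05 and power = 0.8 come from the paper's generic statement;
# the effect size below is a hypothetical assumption for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the number of animals per group needed to detect the assumed
# effect size with a two-sided independent-samples t-test.
n_per_group = analysis.solve_power(
    effect_size=1.0,          # hypothetical Cohen's d; not reported in the paper
    alpha=0.05,               # significance level stated in the paper
    power=0.8,                # power stated in the paper
    ratio=1.0,                # equal allocation to the two groups (default)
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~16.7, i.e. 17 per group
```

Note that a calculation of this kind returns a single n per group under equal allocation, which is exactly why an unexplained 12 vs 19 split is hard to reconcile with the reported power analysis and randomization.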
We do not blame the authors for not providing all this information in the paper. However, we increasingly believe that it is actually the journals’ policies that foster this “tick-the-box” behavior, where requirements are nominally addressed but add little value for the reader.