The journal in which a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, in this article, Björn Brembs summarizes several lines of evidence suggesting that the methodological quality of scientific experiments does not increase with the rank (i.e. the impact factor) of the journal, and that experiments reported in high-ranking journals are often even less methodologically sound than those published in other journals. The data supporting these conclusions are based on quantifiable indicators of methodological soundness in the published literature, e.g. crystallographic quality, effect sizes in gene-association studies, statistical power in neuroscience and psychology, experimental design in in vivo animal experimentation, and error rates in genomics, cognitive neuroscience and psychology, among others.
The author concludes that even under the most conservative interpretation of the data, the most prestigious journals, i.e. those that command the largest audience and attention, at best excel at presenting results that appear ground-breaking on the surface, suggesting that using journal ranking systems as selection pressure in hiring, promotion and funding decisions can lead to an increased frequency of questionable research practices and lower research data quality.