Additional Reads in August 2016

eNeuro editorial: Scientific Rigor or Rigor Mortis?
Christophe Bernard, editor-in-chief of eNeuro, discusses the issue of scientific rigor in this editorial, with reference to two commentaries by Katherine Button and Oswald Steward. To show that the problem is not new, he gives the fitting example of a dispute between Louis Pasteur and Claude Bernard. Strikingly, even today, clear guidelines and proper training do not receive the attention from the scientific community that they deserve. To overcome this, scientists should, first, be more critical of their own observations and state clearly when a study is exploratory; second, they should receive better training in Good Research Practices. eNeuro is establishing a webinar series to tackle the issue of scientific rigor.
Commentary by Katherine Button: Statistical Rigor and the Perils of Chance
Button discusses the role of chance in statistical inference and how poor study design can lead to a high rate of false-positive findings. Furthermore, she argues that the current publication and funding system perpetuates this problem by encouraging the selective publication of positive results.
Commentary by Oswald Steward: A Rhumba of “R’s”: Replication, Reproducibility, Rigor, Robustness: What Does a Failure to Replicate Mean?
Steward's commentary points out many problematic practices that are common in the daily lab routine and in the resulting publications. He refers to “Begley's 6 red flags” and provides a complementary list of points that he suggests calling “the 6 gold stars of rigor”. He proposes that these gold stars become standard publishing practice; they include, for example, reporting statistical power, reporting the timing of data collection, and reporting all analyses performed in that context.
The discipline of biostatistics is nowadays a fundamental component of biomedical, public health, and health services research, and, given ever-larger amounts of data, it is more important than ever to follow proper statistical practice. For that reason, Robert E. Kass and colleagues published ‘Ten Simple Rules for Effective Statistical Practice’ in PLOS Computational Biology. While the article appeared in a computational biology journal, it is highly relevant to other scientific areas as well and is intended to help the research community avoid the pitfalls of well-intended but inaccurate statistical reasoning. The ten rules are:
Rule 1: Statistical Methods Should Enable Data to Answer Scientific Questions
Rule 2: Signals Always Come with Noise
Rule 3: Plan Ahead, Really Ahead
Rule 4: Worry about Data Quality
Rule 5: Statistical Analysis Is More Than a Set of Computations
Rule 6: Keep it Simple
Rule 7: Provide Assessments of Variability
Rule 8: Check Your Assumptions
Rule 9: When Possible, Replicate!
Rule 10: Make Your Analysis Reproducible
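The rules above are general principles rather than code, but Rules 7 and 10 lend themselves to a toy illustration. The sketch below is a hypothetical example (not from the Kass et al. paper) using only the Python standard library: it reports a bootstrap confidence interval for a mean rather than a bare point estimate (Rule 7) and fixes the random seed so the whole analysis can be rerun exactly (Rule 10).

```python
import random
import statistics

# Hypothetical measurements; in a real analysis these would be recorded data.
random.seed(42)  # fixing the seed makes the analysis reproducible (Rule 10)
data = [random.gauss(10.0, 2.0) for _ in range(50)]

def bootstrap_ci(sample, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean:
    report variability, not just a point estimate (Rule 7)."""
    means = sorted(
        statistics.mean(random.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(data)
print(f"mean = {statistics.mean(data):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The percentile bootstrap is only one of several ways to assess variability; the point is that an interval, however computed, conveys far more than a bare mean.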
Functional magnetic resonance imaging (fMRI) is 25 years old, yet the most common statistical methods used for its analyses have not been validated on real data. In addition, an investigation of 241 fMRI studies found that 223 unique analysis strategies were used, meaning that almost no strategy occurred more than once, even though results can vary markedly depending on the analysis strategy (Carp, Neuroimage, 2012). In a typical fMRI experiment, this flexibility could lead researchers to wrongly conclude that activity in a certain area of the brain plays a role in a cognitive function such as perception or memory.
In line with this, a new study, ‘Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates’, published in PNAS, reports that common settings in software for analyzing brain scans can lead to a high rate of false-positive results. Researchers led by Anders Eklund analyzed fMRI data from several public databases. At the 5% significance threshold used in most publications, a false-positive result would be expected in 5% of cases. Instead, depending on the software and the settings, the team found false-positive results up to 70% of the time.
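The cluster-based spatial inference that Eklund and colleagues analyzed is far more involved, but the underlying mechanism can be sketched in a few lines of Python. The simulation below (a simplified stand-in, not the paper's method) runs many independent tests on pure noise at a nominal 5% threshold and shows that, without correction for multiple comparisons, the chance of at least one false positive climbs far above 5%.

```python
import random
import statistics

random.seed(0)

ALPHA = 0.05
N_TESTS = 20        # e.g. 20 brain regions tested, all on pure noise
N_EXPERIMENTS = 500  # repeat the whole "study" many times

def false_positive(n=30):
    """One two-sided one-sample t-test on pure noise at alpha = 0.05.
    Returns True if it (falsely) rejects the null hypothesis."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    t = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
    return abs(t) > 2.045  # critical t value for df = 29, alpha = 0.05

# Fraction of experiments with at least one false positive across N_TESTS tests;
# theory predicts about 1 - 0.95**20, i.e. roughly 64%.
fwer = sum(
    any(false_positive() for _ in range(N_TESTS))
    for _ in range(N_EXPERIMENTS)
) / N_EXPERIMENTS
print(f"family-wise false-positive rate: {fwer:.2f}")
```

With spatially correlated voxels and imperfect noise models, as in real fMRI data, the inflation can behave even worse than this independent-tests idealization, which is the crux of the Eklund et al. result.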