In this editorial, Christophe Bernard, editor-in-chief of eNeuro, addresses the issue of scientific rigor, referring to two Commentaries by Katherine Button and Oswald Steward. To show that the problem is not a novel phenomenon, he gives the wonderful example of a dispute between Louis Pasteur and Claude Bernard. Interestingly, clear guidelines and proper training still do not receive the focus they deserve. To overcome this, scientists should on the one hand be more critical of their own observations and state clearly when their data are exploratory. On the other hand, scientists should receive better training in scientific rigor. eNeuro is establishing a webinar series to tackle the latter issue. Link

Commentary by Katherine Button: Statistical Rigor and the Perils of Chance

Button discusses the role of chance in statistical inference and how poor study design leads to a high rate of false-positive findings. Furthermore, she argues that the current publication and funding system perpetuates this problem by rewarding only the publication of positive results. A small simulation illustrating this point follows below. Link
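A minimal simulation sketch of the underlying statistics (not taken from the commentary; the prior probability of a true effect, effect size, and sample size below are assumed purely for illustration): when true effects are rare and studies are underpowered, a large share of the results that reach significance are false positives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 10_000   # simulated two-group experiments
prior_true = 0.1     # assumed fraction of hypotheses with a real effect
effect_size = 0.5    # Cohen's d when an effect exists
n_per_group = 10     # small samples -> low statistical power
alpha = 0.05

false_pos, true_pos = 0, 0
for _ in range(n_studies):
    has_effect = rng.random() < prior_true
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect_size if has_effect else 0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)          # two-sample t-test
    if p < alpha:
        if has_effect:
            true_pos += 1
        else:
            false_pos += 1

significant = true_pos + false_pos
print(f"significant results: {significant}")
print(f"share that are false positives: {false_pos / significant:.2f}")
```

With these assumed numbers, well over half of the "significant" results come from experiments where no true effect exists, which is the kind of chance-driven outcome the commentary warns about.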

Commentary by Oswald Steward: A Rhumba of “R’s”: Replication, Reproducibility, Rigor, Robustness: What Does a Failure to Replicate Mean?

Steward's commentary points out many problematic practices that are common in daily lab routine and in the resulting publications. He refers to "Begley's 6 red flags" and provides a further list of points which he suggests could be called "the 6 gold stars of rigor". He proposes implementing these gold stars as standard publishing practice; they include, for example, reporting statistical power, the timing of data collection, and all analyses performed in that context. Link