Estimation vs Hypothesis-Testing

As some of our readers may know, the PAASP team was involved in the “Negative Results Award” project that took place in 2018. One important lesson for us was that any discussion of negative (or null) results should start with a decision about whether the results are really negative and how confident we are in that conclusion. For example, a p-value above 0.05 does not necessarily indicate that the results are negative, and we have not found any reading or teaching material that can easily explain to non-statisticians, and in particular to young scientists in biomedical research, the need to move away from a binary decision process. If any of our readers are aware of such tools, please share them with us and we will post the information in our Resource Center.
 
Therefore, we were pleased to see a recent opinion paper by Calin-Jageman and Cumming that provides basic information, with examples, that could serve as (self-)learning material. One particular example focused on a study reporting that “caffeine administration enhances memory consolidation in humans” (Borota et al., 2014). The same results were re-analyzed and visualized with a different, quantitative question in mind: to what extent does caffeine improve memory? To answer this question, the difference between the group means was estimated, and the uncertainty in this estimate due to expected sampling error was quantified. The results can be summarized as: “caffeine is estimated to improve memory relative to the placebo group by 31%, with a 95% confidence interval of (0.2%, 62%)”.
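To make the estimation approach concrete, the point estimate and its interval can be sketched in a few lines of code. This is a minimal illustration, not the analysis from Borota et al. or Calin-Jageman and Cumming: the data are made up, and a percentile bootstrap is used here purely as one simple way to quantify sampling uncertainty.

```python
import random
import statistics

def bootstrap_diff_ci(treatment, control, n_boot=10000, confidence=0.95, seed=0):
    """Point estimate of the difference in group means (treatment - control)
    plus a percentile-bootstrap confidence interval.
    Illustrative sketch only; the data passed in below are hypothetical."""
    rng = random.Random(seed)
    diff = statistics.mean(treatment) - statistics.mean(control)
    # Resample each group with replacement and recompute the difference
    boots = sorted(
        statistics.mean(rng.choices(treatment, k=len(treatment)))
        - statistics.mean(rng.choices(control, k=len(control)))
        for _ in range(n_boot)
    )
    alpha = (1 - confidence) / 2
    lo = boots[int(alpha * n_boot)]
    hi = boots[int((1 - alpha) * n_boot) - 1]
    return diff, (lo, hi)

# Hypothetical memory scores for two groups
caffeine = [5, 6, 7, 8, 9]
placebo = [1, 2, 3, 4, 5]
diff, (lo, hi) = bootstrap_diff_ci(caffeine, placebo)
print(f"estimated improvement: {diff:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

The output is a statement of the same form as in the paper: an effect size with an interval conveying how precisely it was estimated, rather than a yes/no verdict.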
 
Why do we believe that this is a good example that can be used for educational purposes?
First, the confidence interval suggests considerable uncertainty about generalizing from the sample to the world at large. Does that make such results less valuable? Certainly not.
Second, the estimation approach described in the eNeuro opinion paper does not call for a revolution that would be difficult to follow (e.g. abandoning p-values altogether). Instead, it suggests moving towards more complete and informative reporting of results.
And last but not least, prior to reading the article by Calin-Jageman and Cumming, most members of our team had not heard of the eNeuro journal. Previously, learning about a new journal would certainly also have meant inquiring about its “impact factor”. There is no need to know the impact factor of eNeuro: if the editors and reviewers of this journal manage to introduce and maintain high-quality reporting, that alone should put it on the wanted list of all (neuro)scientists, whether readers or authors!
