In this report, Wang and colleagues present the results of a national survey of nearly 400 consulting statisticians about requests from investigators to engage in inappropriate statistical practices.
The four most frequently reported inappropriate requests were (in order of frequency): a) removing or altering some data records to better support the research hypothesis; b) interpreting the statistical findings on the basis of expectation, not actual results; c) not reporting the presence of key missing data that might bias the results; and d) ignoring violations of assumptions that would change results from positive to negative.
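To make the stakes of practice (a) concrete, the short Python sketch below is a purely illustrative simulation (it is not drawn from the report, and all names and numbers are assumptions): two groups are sampled from the same distribution, so there is no true effect, yet selectively discarding the records least favorable to the hypothesis quickly produces a nominally "significant" comparison.

```python
# Illustrative sketch only: how selectively removing data records
# (practice (a) above) can manufacture a "significant" result
# where the true effect is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups drawn from the SAME distribution, so any "effect" is spurious.
control = rng.normal(loc=0.0, scale=1.0, size=40)
treatment = rng.normal(loc=0.0, scale=1.0, size=40)

data = treatment.copy()
_, p = stats.ttest_ind(data, control)
print(f"p-value with all records kept: {p:.3f}")

# Repeatedly discard the single treatment value least favorable to the
# hypothesis "treatment > control" and re-test, stopping once p < 0.05.
removed = 0
while p >= 0.05 and len(data) > 5:
    data = np.delete(data, np.argmin(data))
    removed += 1
    _, p = stats.ttest_ind(data, control)

print(f"records removed: {removed}, p-value after selective removal: {p:.3f}")
```

Because the discarded observations are chosen for their effect on the result rather than for any documented data-quality reason, the final p-value is uninterpretable, which is precisely why such "cleaning" counts as an inappropriate practice.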
Although the survey did not ask statisticians whether they fulfilled these requests, the inappropriate methods described in this report are still used in the published literature, and thus contribute to the problem of nonreproducible research.
Practices like these are very difficult to detect in published work; detection requires either unusual transparency from the authors or a time-consuming re-examination of the original research methods and data. At present, one can only speculate whether these requests stem mainly from researchers’ inexperience or from their response to incentives that reward ‘positive’ results published in high-impact-factor journals.