Poor statistical reporting persists despite editorial advice


Scientific discoveries must be reported accurately. If they are not, the general public will lose trust in science and question whether their tax dollars are well spent.

Unfortunately, the quality of research reports in the biomedical sciences is generally poor. Basic statistical reporting is inadequate, and spin – the distorted, self-serving interpretation of results – is common.

In an attempt to improve the quality of research reports in physiology and pharmacology, the Journal of Physiology and the British Journal of Pharmacology jointly published a series of editorials in 2011 setting out best-practice guidelines for data analysis, data presentation, and reporting of results.

The editorial series included several sensible recommendations. For example, researchers should report exact statistical results, use correct summary measures of variability, and include all individual data when plotting results – a picture is worth a thousand words.
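As an illustration (a minimal sketch, not taken from the editorial series or from our study), the Python code below shows one way to follow the last recommendation: plot every individual data point alongside the group mean. The groups and values are made up.

# Illustrative only: hypothetical data for two made-up groups.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = {"Control": rng.normal(10.0, 2.0, 12),   # hypothetical values, n = 12
          "Treated": rng.normal(12.0, 2.0, 12)}

fig, ax = plt.subplots()
for i, (label, values) in enumerate(groups.items()):
    x = i + rng.uniform(-0.08, 0.08, values.size)            # horizontal jitter
    ax.plot(x, values, "o", alpha=0.6)                        # every individual data point
    ax.plot([i - 0.2, i + 0.2], [values.mean()] * 2, "k-")    # group mean as a short bar

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("Outcome (arbitrary units)")
plt.show()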

In our recent study, we examined whether these recommendations improved reporting practices. We developed a 10-question audit based on key recommendations from the editorial series. We then used the audit to assess 400 papers published in the Journal of Physiology and the British Journal of Pharmacology before and after the editorial series, to determine whether the reporting of statistical and other results had improved.

 

WHAT DID WE FIND?

Overall, the editorial series had no impact on the quality of scientific reporting. More than 15% of papers did not define how variability of the data was summarised. When they did, a staggering 80% of papers summarised data variability using an incorrect measure. Nearly all papers failed to report exact statistical results for all statistical analyses.
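To illustrate why the choice of summary measure matters (a common example, not a specific result from our audit): the standard deviation describes the variability of the data, whereas the standard error of the mean describes the precision of the estimated mean and shrinks as the sample grows. The short Python sketch below, using hypothetical measurements, shows the difference.

# Hypothetical measurements; the numbers are made up for illustration.
import numpy as np

sample = np.random.default_rng(2).normal(50.0, 10.0, 30)   # n = 30

sd = sample.std(ddof=1)            # standard deviation: spread of the data
sem = sd / np.sqrt(sample.size)    # standard error: uncertainty of the mean

print(f"mean = {sample.mean():.1f}, SD = {sd:.1f}, SEM = {sem:.1f}")
# Reporting "mean ± SEM" as if it summarised data variability understates
# the spread, because the SEM is always smaller than the SD.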

Fewer than 5% of papers showed individual subject data when plotting results.

When results were close to statistical significance (i.e. p values between 0.05 and 0.1), scientists erroneously interpreted these non-significant results as statistical trends more than half the time. Here is an example from one of the papers we examined: “The level of IL-15 mRNA tended to be lower in vastus lateralis than in triceps (P = 0.07).”
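Why is this interpretation a problem? A p value just above 0.05 is not on a reliable trajectory towards significance; as Wood and colleagues showed (see Key References), collecting extra data often fails to make such a result significant. The rough Python simulation below illustrates the idea; the effect size, sample sizes, and t-test are assumptions for this sketch, not the analysis from that paper or from our study.

# Rough simulation for illustration; effect size and sample sizes are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_initial, n_extra, true_effect = 20, 20, 0.3    # per-group n and effect in SD units
near_significant, became_significant = 0, 0

for _ in range(20000):
    a = rng.normal(0.0, 1.0, n_initial + n_extra)
    b = rng.normal(true_effect, 1.0, n_initial + n_extra)
    p_initial = stats.ttest_ind(a[:n_initial], b[:n_initial]).pvalue
    if 0.05 < p_initial < 0.1:                    # a "trend" in the initial data
        near_significant += 1
        p_final = stats.ttest_ind(a, b).pvalue    # p value after doubling the sample
        became_significant += p_final < 0.05

print(f"{became_significant / near_significant:.0%} of 'trends' became significant")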

 

SIGNIFICANCE AND IMPLICATIONS

In line with previous work, we found that the quality of statistical reporting and presentation of results in science is generally poor. Furthermore, editorial guidelines do little to improve reporting practices. Hence, we need better strategies to improve scientific communication.

Why are reporting practices not changing? The ever-increasing pressure to publish papers may be partly to blame. Statistically significant and visually appealing results are more likely to be accepted for publication. Thus, scientists have come to realise, consciously or not, that ‘if you torture your data long enough, it will confess to anything’.

At present, there are few incentives to conduct and report science rigorously. Thankfully, the call to improve reporting practices is getting louder. There are now consolidated expert guidelines for good statistical reporting practices in biomedical research, in addition to those developed by specific journals.

How might scientific communication be improved? Individual scientists can improve their statistical fluency and upskill through courses and workshops. Journals can improve reporting practices by enforcing adherence to reporting guidelines. Funders, research institutions, and scientific associations can introduce career incentives to conduct and report higher-quality, reproducible science, and can audit labs and their publications to help stamp out shabby science.

These strategies may be difficult to implement, but they are needed for scientific discoveries to be reported accurately. The public and the next generation of scientists deserve this.

 

PUBLICATION REFERENCE

Diong, J, Butler, AA, Gandevia, SC, Héroux, ME. Poor statistical reporting, inadequate data presentation and spin persist despite editorial advice. PLoS ONE 13, 8: e0202121, 2018.

 

KEY REFERENCES

Lang, TA, Altman, DG. Basic statistical reporting for articles published in biomedical journals: the “Statistical Analyses and Methods in the Published Literature” or the SAMPL Guidelines. Int J Nurs Stud 52: 5-9, 2015.

Moher, D, Naudet, F, Cristea, IA, Miedema, F, Ioannidis, JPA, Goodman, SN. Assessing scientists for hiring, promotion, and tenure. PLoS Biol 16, 3: e2004089, 2018.

Wood, J, Freemantle, N, King, M, Nazareth, I. Trap of trends to statistical significance: likelihood of near significant P value becoming more significant with extra data. BMJ 348: g2215, 2014.

 

AUTHOR BIO

Dr. Joanna Diong is a Lecturer in anatomy at the Faculty of Medicine and Health, University of Sydney. Her research investigates the mechanisms of impaired human movement after stroke using kinematics and electromyography. She is an advocate of good research practice, and she co-authors the blog Scientifically Sound in her spare time.
