In my never-ending quest to add to readers’ knowledge, or at least to help find things that aid in getting a good night’s sleep, I run across some pretty esoteric research. I often save some of the oddball stuff for Fridays. Today’s post is in an area I actually find really interesting–research and statistical design–but that I admit could be considered dry, even boring. The article in question reviews the reporting of statistical uncertainty in oncology trials, but it has broader implications. (JAMA Article)

Almost all trials have results derived through statistical analysis, and those results have an uncertainty, or range of possible truth, associated with them. Understanding that level of uncertainty is vital to understanding the actual results and the applicable lessons derived from the trial. The authors sought to ascertain whether published results of oncology randomized clinical trials (with a focus on the abstract) adequately conveyed the uncertainty associated with those results. Over 550 trials with reported P values between .01 and .10 (a value less than .05 is the traditional threshold for considering experimental results likely to concord with truth, whatever that is) were assessed for their explanation of uncertainty.

Their assessment of uncertainty reporting was based largely on three factors: whether conclusions were limited to the conditions of the trial; whether speculative language was used; and whether the significance of results was discussed in statistical terms. A variety of trial characteristics were examined for potential association with how uncertainty in results was treated. 40% of reports did not express uncertainty at all; 29% had some explanation of uncertainty; and 31% gave full uncertainty data. Characteristics associated with greater explanation of uncertainty included a later year of publication, a lower P value, non-cooperative-group trials, and an end point other than overall survival.
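To see why that .01–.10 P value band matters, it helps to remember that a P value near .05 generally implies a confidence interval whose edge sits very close to the null value (for a hazard ratio, 1.0). Below is a minimal sketch of that relationship using only the Python standard library; the hazard ratio and P value in the example are hypothetical illustrations, not numbers from the JAMA study, and the calculation assumes a simple normal-approximation test on the log scale.

```python
from math import exp, log
from statistics import NormalDist

def ci_from_p(hazard_ratio, p_value, level=0.95):
    """Back out an approximate confidence interval for a hazard ratio
    from its point estimate and two-sided P value, assuming a
    normal-approximation test on the log scale."""
    # The z statistic implied by the reported two-sided P value
    z_test = NormalDist().inv_cdf(1 - p_value / 2)
    # Implied standard error of log(hazard ratio)
    se = abs(log(hazard_ratio)) / z_test
    # z multiplier for the requested confidence level (1.96 for 95%)
    z_ci = NormalDist().inv_cdf(0.5 + level / 2)
    lower = exp(log(hazard_ratio) - z_ci * se)
    upper = exp(log(hazard_ratio) + z_ci * se)
    return lower, upper

# A hypothetical "marginal" result: hazard ratio 0.80 with P = .04.
# The 95% interval's upper bound lands just below 1.0 -- the result
# barely excludes "no benefit," which is exactly the kind of
# uncertainty an abstract ought to convey.
lower, upper = ci_from_p(0.80, 0.04)
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

The point of the sketch is not the arithmetic but the intuition: a bare statement like "HR 0.80, P = .04" sounds decisive, while the equivalent interval makes clear how close the evidence sits to no effect at all.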
The overall conclusion is that publications conveyed less uncertainty about results than actually existed. This would naturally tend to lead clinicians, and patients, to believe there was more benefit to a therapy than there really was.
Unfortunately, science has done itself no favors in recent years by allowing ideological and political considerations to creep into the process of developing and disseminating information. Science is critical. It is how we improve our knowledge and understanding of everything. Experimental design and statistical techniques are core to that development of scientific knowledge. But different experimental designs and statistical techniques have different strengths and weaknesses. Scientists need to be completely transparent about why they chose the design and statistics they did, and they need to give us alternative analyses. Any finding has to be replicated several times before it can be treated as truly trustworthy. While this is important in all fields, it is absolutely critical in health and health care, where people’s lives are literally at stake and where billions of dollars in spending may be determined by policymakers’ use of research. Oncology therapies are very expensive and often offer only a modest improvement in outcomes. Clinicians and patients are probably often being misled into using these therapies.