Comparative effectiveness research is one of the most discussed aspects of health reform proposals. Some see it as potentially saving tens of billions of dollars in unnecessary care; others view it as a first step toward rationing. Several reports and commentaries relating to comparative effectiveness research have been released in the last week. Two government groups with legislated responsibilities to determine research priorities issued reports. The Federal Coordinating Council for Comparative Effectiveness Research, established by the stimulus bill to oversee the federal government’s comparative effectiveness efforts, issued a lengthy report that contributed little that is new. (Council Report) The Council issued a definition of comparative effectiveness research, created criteria for setting priorities and set up a “strategic framework” for conducting the research program. The only priorities the Council identified were that the primary investment should be in data infrastructure and that secondary investments should be in dissemination and translation of findings, research on priority populations and priority types of interventions. The priority interventions included rehabilitative devices, procedures and surgery, diagnostic testing, behavioral change, delivery system strategies and prevention. Creating a good data infrastructure definitely lays a foundation for the research, but it does nothing to quickly provide more concrete evidence for developing care guidelines that might actually save money. Creating the data infrastructure is also likely to be more expensive and take more time than initially estimated, as is almost always the case with IT-related projects.
The report also gives the clear impression that politics will be at play in deciding which populations and projects are most worthy, when the only logical criteria are the amount of spending that might be affected by the research, coupled with considerations of how quickly and inexpensively the research can be completed and disseminated.
The other governmental report came from the Institute of Medicine and is more useful. (IOM Report) The IOM also defined comparative effectiveness research. The group then went on to set out 100 specific research projects as priorities. Effectiveness assessments were recommended for treatment strategies for atrial fibrillation, different treatments for hearing loss, fall prevention in older adults, upper endoscopy use for gastroesophageal reflux disease, and biologic use for inflammatory diseases. Some process-related research was also recommended, including the effectiveness of techniques for disseminating research results and the evaluation of comprehensive care coordination programs. The IOM also set out its view of the requirements for a sustained research program.
In a Health Affairs article, a definition of “marginal medicine” was set forth, with the purpose of helping to identify priority areas for comparative effectiveness research. (HA article) The authors set out four categories of marginal medicine: lack of evidence for any indication; lack of evidence for uses beyond those for which benefit has been found (e.g., off-label drug use); a higher-cost option that produces no better clinical outcome; and a higher-cost option that produces only an incremental benefit. The authors then examined methods of conducting comparative effectiveness research for each of these categories, including literature reviews and meta-analyses, modeling, observational analyses and randomized controlled trials. The authors believe research findings falling in the first and third categories will be non-controversial; the fourth is likely the most controversial. The article is a useful contribution to thinking about how to prioritize and conduct comparative effectiveness research.
The last piece is a JAMA commentary, vol. 302, p. 194, which argues that comparative effectiveness research should be focused on areas that would reduce the cost of care by at least a specific amount and that could be immediately implemented. The author makes a very good point that a rigorous and expansive comparative effectiveness research program could push the health industry to become more focused on developing products and procedures that reduce cost. He points to the information technology sector as one where better functionality and lower cost are constantly expected in each new generation of products.