
Interrupted Time-Series Experimental Designs

August 14, 2019 | Commentary

Establishing cause and effect in a complex area like health care and health care policy is difficult. So many factors are involved and changing at any one time. Researchers get creative in trying to tease out the effect of a particular policy, but experimental design and statistics are full of traps and false leads. A paper at the National Bureau of Economic Research examines one commonly used design, the single interrupted time-series, and finds it wanting. (NBER Paper) Randomized controlled trials are the best way to understand cause and effect, although not perfect, but that design is hard to apply to health care policy. The single interrupted time-series attempts to ascertain the trend of an outcome before a policy change was introduced and compare that projected trend to what actually occurred afterward. The authors test the validity of the approach by comparing the results of an actual randomized trial with what the results would have been using a single interrupted time-series. The test bed was Oregon's Medicaid expansion, which gave a fixed number of uninsured residents, chosen by lottery, access to the program. A number of studies have analyzed the effect of this expansion, which created a natural experiment for comparing those who won the lottery and got coverage with those who didn't and remained uninsured. One such study showed that, contrary to what the expansion's advocates predicted, gaining coverage actually increased emergency room use.
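To make the method concrete, here is a minimal sketch of a single interrupted time-series estimate: fit a trend to the pre-policy period only, project it forward as the counterfactual, and treat the gap between observed and projected outcomes as the policy effect. The monthly ER-visit counts below are entirely hypothetical and are not taken from the Oregon study.

```python
import numpy as np

# Hypothetical monthly ER-visit counts: 12 pre-policy months, 6 post-policy months.
pre = np.array([100, 102, 101, 104, 103, 106, 105, 108, 107, 110, 109, 112], dtype=float)
post = np.array([118, 119, 121, 120, 123, 124], dtype=float)

# Fit a linear trend to the pre-policy period only.
t_pre = np.arange(len(pre))
slope, intercept = np.polyfit(t_pre, pre, 1)

# Project that trend into the post-policy months: the assumed counterfactual,
# i.e., what would have happened had the policy not changed.
t_post = np.arange(len(pre), len(pre) + len(post))
projected = intercept + slope * t_post

# The estimated "effect" is the average gap between observed and projected values.
effect = (post - projected).mean()
print(f"Estimated average effect: {effect:+.1f} visits/month")
```

The fragility the paper highlights lives in the projection step: the estimate is only as good as the assumption that the pre-period trend would have continued unchanged, which a randomized comparison group does not require.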

In the current paper, the authors went back and applied the single interrupted time-series design to the same question, and found that it would have shown a large decline in ER use among those who gained coverage, the exact opposite of what actually occurred. So one message is to use this design cautiously. The authors do a good job of laying out the potential problems with the design and best practices to avoid contamination of results. But it is truly frightening to realize that so much published research could have serious errors in design. People rely on this research to make policy decisions and to evaluate the success of programs. Research on research is some of the most valuable work done. It seems pretty boring, but it has a transcendent importance. And this lesson applies to areas other than health care. It is worth remembering that research can be as faulty as any other human endeavor.
