We have all kinds of measures for everything in health care. How do we know that those measures actually capture what they are supposed to, or do so in an accurate and reliable way? Research about research methods is very important to help us trust the data we rely on for policy and operational purposes. A study in Health Services Research examines the validity and reliability of patient-reported experience of care measures, which figure heavily in a number of reimbursement methods. (HSR Article)

Patient experience measures are supposed to be distinct from patient satisfaction measures, identifying more specifically what actually happened during a medical care episode. I suspect that from the patient's perspective there isn't much difference, and that a patient's reporting on aspects of care is heavily flavored both by their satisfaction with the care and by their expectations of what the care would be like before it occurred. Nonetheless, such measures could help providers improve patient satisfaction and meet expectations, or modify those expectations to be more realistic.

A measure's validity concerns whether it actually measures what it is supposed to; its reliability refers to whether it measures the same thing consistently across providers and circumstances. The authors reviewed a range of current measures against these criteria. They found 109 articles describing the development and testing of a patient experience measure, and applied a standard tool designed to identify bias and other problems in measure development and research. The tool has 20 items to be assessed, and the researchers grouped the patient experience measures by how many of those items they satisfied. They also used a second tool that appears more focused on the statistical testing of the measures.
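The grouping step described above amounts to tallying each paper's checklist score into bands. A minimal sketch, using hypothetical scores and band cutoffs chosen to match the score ranges reported in the study (not the study's actual data or scoring tool):

```python
# Band papers by how many of the 20 checklist criteria they satisfy.
# The scores below are illustrative placeholders, not data from the study.

def band(score: int) -> str:
    """Assign a 0-20 checklist score to a reporting-quality band."""
    if score > 15:
        return "high (>15)"
    if score >= 10:
        return "moderate (10-15)"
    return "low (<10)"

def band_counts(scores):
    """Count how many papers fall into each band."""
    counts = {}
    for s in scores:
        b = band(s)
        counts[b] = counts.get(b, 0) + 1
    return counts

# A handful of hypothetical paper scores
example_scores = [18, 16, 12, 14, 8, 19, 11]
print(band_counts(example_scores))
# → {'high (>15)': 3, 'moderate (10-15)': 3, 'low (<10)': 1}
```

Dividing counts in each band by the total number of papers would yield the kind of percentages the study reports.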
A total of 88 patient experience measures were examined. About a third related to inpatient care and a fourth to primary care, and the measures came from a variety of countries. 45% of the measures were administered by mail, 33% face-to-face and 14% by phone.

On the first instrument, with its 20 criteria, 63% of the papers on patient experience measures scored greater than 15 and 28% scored between 10 and 15. The major issues identified were inadequate accounting for response-rate bias, lack of description of non-responders, and failure to obtain IRB approval or patient consent. Using the second tool, the researchers found that many of the criteria simply were not applied: reliability testing, hypothesis testing, cross-cultural validity, criterion validity, and measurement error and agreement each went unassessed in over half of the testing of patient experience measures.

The authors note one additional issue: the measures tend to focus on a single episode of care, but given the prevalence of chronic conditions and the emphasis on care coordination, it would be helpful to have more measures covering broader episodes. The good news is that the authors concluded that most of the measures do test what they are intended to test, and do so reliably. But measure developers should be using a more consistent and comprehensive approach to their work.