Tuesday, October 6, 2009
NQMC - Expert Resources - Expert Commentary
Expert Commentary
Perspective
Envisioning the Next Generation of Performance Measures for Ambulatory Care
By: Stephen D. Persell, MD, MPH
During the past few decades, an increased emphasis on measuring and improving the quality of health care in the United States fostered considerable improvements in the quality of care delivered (1,2), but important quality gaps still remain. The measurement tools widely used to assess quality during this time, while useful, have proven to be rather blunt instruments. For outpatient settings, the most common ambulatory care performance measures used to assess clinical effectiveness have been either simple process-of-care measures (e.g., the proportion of eligible patients who received a laboratory test or a prescription medication) or intermediate clinical outcomes (e.g., the proportion of eligible patients who achieved a specific level of control for their high blood pressure or diabetes at the last measurement during a fixed time period). Many of these measures were designed so that they could be assessed using data routinely collected for other purposes (e.g., administrative claims) or with very limited chart review.
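To make the distinction concrete, the minimal sketch below shows how these two kinds of simple measures are typically computed. The patient records, field names, and eligibility rules are hypothetical illustrations, not any program's actual specification.

# A minimal sketch of a simple process measure and a simple intermediate
# outcome measure. All fields and records are hypothetical.

def process_measure(patients):
    # Proportion of eligible (diabetic) patients with an HbA1c test during the period.
    eligible = [p for p in patients if p["has_diabetes"]]
    tested = [p for p in eligible if p["hba1c_tested"]]
    return len(tested) / len(eligible) if eligible else None

def intermediate_outcome_measure(patients):
    # Proportion of eligible (hypertensive) patients whose last blood pressure
    # was below 140/90 mm Hg.
    eligible = [p for p in patients if p["has_hypertension"]]
    controlled = [
        p for p in eligible
        if p["last_systolic"] < 140 and p["last_diastolic"] < 90
    ]
    return len(controlled) / len(eligible) if eligible else None

patients = [
    {"has_diabetes": True, "hba1c_tested": True,
     "has_hypertension": True, "last_systolic": 152, "last_diastolic": 88},
    {"has_diabetes": True, "hba1c_tested": False,
     "has_hypertension": True, "last_systolic": 128, "last_diastolic": 76},
]
print(process_measure(patients))               # 0.5
print(intermediate_outcome_measure(patients))  # 0.5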
Measures like these probably do permit valid comparisons of the quality of care received by large groups of patients. The use of these kinds of measures in public reporting or accreditation programs has undoubtedly had a large, favorable impact on the quality of health care. However, when we consider how performance measures are currently being used and how we should measure performance in the future, it is essential to bear in mind that simple measures have important limitations.
Strengths and Weaknesses of Simple Quality Measures
Simple process-of-care measures reflect the care that patients actually receive. Processes of care may be more directly under the control of health care providers and less influenced by differences in the characteristics of the patients served by one group of providers compared to another (i.e., differences in case mix) than outcome measures. When deficits in the delivery of a process are discovered, these findings may be more directly actionable for providers or health systems than deficits in outcomes. Unfortunately, simple process measures often connect very loosely to important clinical outcomes. Quality improvement efforts that successfully improve simple process measures may have little or no effect on improving clinical outcomes. (3,4) For example, increasing lab testing for patients with diabetes may be far easier to accomplish than controlling blood pressure, blood sugar, or cholesterol. Simple measures—whether or not a test was done—also do not assess how well the information obtained from that test was used to care for a patient.
Outcomes are often what patients and other stakeholders care about the most. Outcome measures have high face validity and may be more informative than process measures because they may reflect both the measured and unmeasured processes that go into achieving the desired result. Outcome measures for ambulatory care, however, are often only partially under the control of individual health care providers (e.g., non-adherence to recommended care plans may be a difficult behavior for health care providers to change). Also, the results of outcome measures may have more to do with the characteristics of the patient populations involved (e.g., the prevalence of preexisting severe disease) than with the quality of care provided; yet methods that attempt to account for differences in case mix are generally not used. Simple outcome measures designed to minimize the amount of data collection needed may also not accurately indicate who is receiving good care or substandard care at the individual patient level. For example, a patient whose average blood pressure is controlled may appear uncontrolled on the basis of a single reading; similarly, a patient prescribed multiple appropriate antihypertensive medications may still not achieve a blood pressure of < 140/90 mm Hg.
Implications of Using Simple Measures in the Contemporary Environment
Measures designed to allow for comparisons in quality between groups as large as health plans, serving hundreds of thousands or millions of members, are now being applied to much smaller groups of patients, namely those served by individual practices or physicians. Increasingly, this is being done with financial incentives attached. (5,6) Simple measures picked for ease of data collection could produce unintended consequences if they incorrectly suggest that individual patients are receiving poor care. (7-9) A perverse incentive may result, potentially motivating physicians to stop caring for the most vulnerable patients if physicians feel that the measured outcomes on which their remuneration depends are not within their control—perhaps because patients have severe disease, cannot tolerate treatment, or do not adhere to care plans. They might also begin recommending tests or treatments to patients who are unlikely to benefit, or who may be harmed by them.
Improving Performance Measures by Using More Clinical Data
If more detailed clinical data could be obtained without great difficulty, it may become possible to devise performance measures that provide a better reflection of who is receiving inadequate care, which may be more suitable for measuring quality at the small group level. The wealth of clinical information available in electronic health record systems (EHRS) could serve this purpose.
In one study, colleagues and I examined how hypertension quality measures that drew on multiple data elements in an electronic health record could be designed to be more specific indicators of significant quality problems than a simple outcome measure (i.e., last blood pressure of less than 140/90 mm Hg). (10) In a population of 3,913 patients with diagnosed hypertension, blood pressure control judged by the last office measurement of < 140/90 mm Hg during the study period—the simple outcome measure—was 58.1%. However, small modifications to the measurement criteria produced large changes in the proportion of patients identified as receiving adequate care. Counting patients whose last or mean blood pressure (based on up to three recent readings) was at or below 140/90 mm Hg as having adequate care raised performance to 75.4%. When the process of being prescribed aggressive treatment—measured as being prescribed at least a 3-drug regimen including a diuretic, or 3 drugs not including a diuretic plus a diagnosis that was a potential contraindication to a diuretic—was included as part of the numerator criteria for the measure, performance rose to 82.5%. Accounting for low diastolic blood pressure—to remove the incentive to overtreat patients with potentially dangerously low diastolic blood pressure—raised performance to 83.6%. The quality deficit that remains according to the complex measure is much smaller than for the simple measure, but the result may be a more specific indicator of true quality problems. If financial incentives were applied to this measure, it would not penalize providers who care for large numbers of patients with resistant and difficult-to-control hypertension. In addition, it might prove less likely to motivate overtreatment of individual patients whose blood pressure was actually controlled or for whom intensifying therapy could increase risk.
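A rough sketch of this layered numerator logic, applied to a single patient's record, is shown below. The record structure, the use of a mean of up to three readings, and the diastolic floor of 65 mm Hg are illustrative assumptions for this example, not the published measure specification.

# Sketch of a layered hypertension numerator: controlled blood pressure, or
# aggressive treatment, or low diastolic pressure. Fields and thresholds are
# illustrative assumptions only.

def last_bp_controlled(pt):
    # Simple measure: last office blood pressure < 140/90 mm Hg.
    systolic, diastolic = pt["bp_readings"][-1]
    return systolic < 140 and diastolic < 90

def last_or_mean_controlled(pt):
    # Last reading controlled, or mean of up to three recent readings at or
    # below 140/90 mm Hg.
    recent = pt["bp_readings"][-3:]
    mean_sys = sum(s for s, _ in recent) / len(recent)
    mean_dia = sum(d for _, d in recent) / len(recent)
    return last_bp_controlled(pt) or (mean_sys <= 140 and mean_dia <= 90)

def aggressive_treatment(pt):
    # At least a 3-drug regimen including a diuretic, or 3 drugs without a
    # diuretic plus a potential contraindication to a diuretic.
    if pt["num_antihypertensives"] < 3:
        return False
    return pt["on_diuretic"] or pt["diuretic_contraindication"]

def low_diastolic(pt, floor=65):
    # Do not demand further intensification when diastolic pressure is already low.
    recent = pt["bp_readings"][-3:]
    return sum(d for _, d in recent) / len(recent) < floor

def complex_numerator(pt):
    return last_or_mean_controlled(pt) or aggressive_treatment(pt) or low_diastolic(pt)

patient = {
    "bp_readings": [(158, 92), (150, 88), (146, 86)],  # (systolic, diastolic)
    "num_antihypertensives": 3,
    "on_diuretic": True,
    "diuretic_contraindication": False,
}
print(last_bp_controlled(patient))  # False: last reading is 146/86 mm Hg
print(complex_numerator(patient))   # True: on a 3-drug regimen that includes a diuretic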
Complex measures such as the one in this example, which are part outcome and part process of care, may have some of the advantages of both simple process measures and simple outcome measures. Patients whose outcomes are difficult to control may still receive appropriate care. In this hypertension example, we chose to construct a quality measure in which being prescribed at least a 3-drug regimen including a diuretic was an acceptable process, even if the blood pressure was not controlled. This measure may be more directly under the control of the treating clinician (and may also better reflect the clinical evidence for treating hypertension, since evidence for using more than 3 drugs in the treatment of hypertension is sparse) than an outcome-only measure. Other approaches to combining process and outcomes in the same performance measure have been described as well. Measures can be designed to indicate that an outcome (e.g., blood pressure, cholesterol) was controlled or that treatment was intensified within a fixed period of time. (11-14) Kerr and colleagues developed a measure of cholesterol control that included having a controlled cholesterol level, receiving high-intensity statin therapy, having a contraindication to statins, or having drug therapy increased within 6 months. (11) This approach has also been applied to blood pressure and diabetes control in a large health system with an advanced electronic health record. (13,14)
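The "controlled, or acted upon within a fixed window" pattern can be sketched as follows, loosely following the cholesterol example. The LDL threshold of 100 mg/dL, the field names, and the 180-day intensification window are illustrative assumptions rather than the published measure definition.

# Sketch of a combined outcome-or-process measure in the spirit of the
# cholesterol example above. Thresholds and fields are illustrative.

from datetime import date, timedelta

def meets_cholesterol_measure(pt, as_of):
    if pt["ldl_mg_dl"] is not None and pt["ldl_mg_dl"] < 100:
        return True                              # outcome controlled
    if pt["on_high_intensity_statin"]:
        return True                              # already on aggressive therapy
    if pt["statin_contraindication"]:
        return True                              # intensification not appropriate
    last = pt["last_intensification"]
    if last is not None and as_of - last <= timedelta(days=180):
        return True                              # therapy recently intensified
    return False                                 # uncontrolled and no recent action

patient = {
    "ldl_mg_dl": 128,
    "on_high_intensity_statin": False,
    "statin_contraindication": False,
    "last_intensification": date(2009, 9, 1),    # dose increased three months earlier
}
print(meets_cholesterol_measure(patient, as_of=date(2009, 12, 31)))  # True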
Having access to more clinical data opens other possibilities for outcome measures. One could abandon the simple threshold as an indicator of adequate or inadequate care. Again using the example of blood pressure, if multiple blood pressure measurements can be used, it becomes possible to determine which patients have achieved clinically important reductions in their blood pressure. A patient with a pre-treatment blood pressure of 195/108 mm Hg who returns with a blood pressure on treatment of 144/75 mm Hg is deriving a much greater clinical benefit from treatment than a person with a pre-treatment blood pressure of 142/75 mm Hg who returns with a blood pressure of 139/73 mm Hg.
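One way to express such a reduction-based criterion is sketched below, using the two patients just described. The 10 mm Hg systolic cutoff for a "clinically important" reduction is an illustrative assumption.

# Sketch of an outcome measure based on the size of the blood pressure
# reduction rather than a single threshold. The 10 mm Hg cutoff is illustrative.

def systolic_reduction(pre_treatment_bp, on_treatment_bp):
    # Fall in systolic pressure from the pre-treatment baseline.
    return pre_treatment_bp[0] - on_treatment_bp[0]

def clinically_important_reduction(pre_treatment_bp, on_treatment_bp, min_drop=10):
    return systolic_reduction(pre_treatment_bp, on_treatment_bp) >= min_drop

print(systolic_reduction((195, 108), (144, 75)))              # 51 mm Hg
print(clinically_important_reduction((195, 108), (144, 75)))  # True
print(systolic_reduction((142, 75), (139, 73)))               # 3 mm Hg
print(clinically_important_reduction((142, 75), (139, 73)))   # False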
Challenges and Conclusions
There are significant obstacles to using complex measures that rely on substantial amounts of clinical data. The greatest may be the slow pace of EHRS adoption in the United States. Data collection for multiple complex measures could be prohibitively burdensome if it could not be done electronically from existing data sources. Even if adoption of EHRS spreads, great effort would be needed to integrate complex quality measures into EHRS so that comparable results can be produced across different systems. The information management tasks required to achieve these objectives will not be trivial.
Ultimately, additional research is needed to determine whether it is worth the effort to design and implement quality measures that paint a more complete and nuanced picture of the quality of ambulatory care patients receive. Whether such complex measures will motivate quality improvement any differently than existing simple measures, or will be less likely to promote unintended consequences, remains an empirical question that needs to be explored.
Author
Stephen D. Persell, MD, MPH
Northwestern University, Chicago, Illinois
Disclaimer
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Dr. Persell reports business/professional affiliations with the American Medical Association's Physician Consortium for Performance Improvement, National Committee for Quality Assurance, and the American College of Cardiology/American Heart Association/Physician Consortium for Performance Improvement CAD/HTN Work Group.
References
1. The State of Health Care Quality 2008. Washington, DC: National Committee for Quality Assurance; 2008.
2. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA. Jan 15 2003;289(3):305-312.
3. Landon BE, Hicks LS, O'Malley AJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med. Mar 1 2007;356(9):921-934.
4. Mangione CM, Gerzoff RB, Williamson DF, et al. The association between quality of care and the intensity of diabetes disease management programs. Ann Intern Med. Jul 18 2006;145(2):107-116.
5. Rosenthal MB, Landon BE, Howitt K, Song HR, Epstein AM. Climbing up the pay-for-performance learning curve: where are the early adopters now? Health Aff (Millwood). Nov-Dec 2007;26(6):1674-1682.
6. Rosenthal MB, Landon BE, Normand SL, Frank RG, Epstein AM. Pay for performance in commercial HMOs. N Engl J Med. Nov 2 2006;355(18):1895-1902.
7. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician "report cards" for assessing the costs and quality of care of a chronic disease. JAMA. Jun 9 1999;281(22):2098-2105.
8. Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. Aug 10 2005;294(6):716-724.
9. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. Mar 9 2005;293(10):1239-1244.
10. Persell SD, Kho AN, Thompson JA, Baker DW. Improving hypertension quality measurement using electronic health records. Med Care. Apr 2009;47(4):388-394.
11. Kerr EA, Smith DM, Hogan MM, et al. Building a better quality measure: are some patients with 'poor quality' actually getting good care? Med Care. Oct 2003;41(10):1173-1182.
12. Kerr EA, Krein SL, Vijan S, Hofer TP, Hayward RA. Avoiding pitfalls in chronic disease quality measurement: a case for the next generation of technical quality measures. Am J Manag Care. Nov 2001;7(11):1033-1043.
13. Rodondi N, Peng T, Karter AJ, et al. Therapy modifications in response to poorly controlled hypertension, dyslipidemia, and diabetes mellitus. Ann Intern Med. Apr 4 2006;144(7):475-484.
14. Selby JV, Uratsu CS, Fireman B, et al. Treatment intensification and risk factor control: toward more clinically relevant quality measures. Med Care. Apr 2009;47(4):395-402.