Tuesday, January 20, 2015

National Quality Measures Clearinghouse | Expert Commentaries: Clinician Responses to Using Patient-Reported Outcome Measures for Quality Improvement

National Quality Measures Clearinghouse (NQMC)

Expert Commentary

Clinician Responses to Using Patient-Reported Outcome Measures for Quality Improvement
By: Maria B. Boyce, John P. Browne, PhD
For a diversity of perspectives on Patient-Reported Outcome Measures (PROMs), please refer to these other NQMC expert commentaries: Quality Measures: Patient-Reported Outcomes for Quality Improvement of Clinical Practice and Steps for Assuring Rigor and Adequate Patient Representation When Using Patient-reported Outcome Performance Measures.
There is increasing interest in using Patient-Reported Outcome Measures (PROMs) to improve the quality of healthcare (1). Routine collection of PROMs to inform, compare, and manage healthcare professionals and facilities has already been implemented in a number of countries, including England, Australia, the United States, Sweden, and the Netherlands (2), in spite of weak evidence to support the use of PROMs as quality improvement tools. To date, 10 systematic reviews have examined the impact of feeding back PROMs data to clinicians. These reviews demonstrate that PROMs can improve patient-clinician communication and the processes of care for individual patients, but they have also consistently shown minimal influence on patient outcomes (2-11). With respect to the use of PROMs data to compare hospital performance, the emerging evidence is similarly unconvincing. For example, a 2013 evaluation of the English PROMs Programme, which compares NHS Trust performance for four common surgical procedures, found that the Programme is having little impact on patient outcomes (12).
Why is this happening? We recently published a systematic review of qualitative studies of the experiences of professionals when using PROMs to improve the quality of healthcare (13). We identified four distinct themes: practical, attitudinal, methodological, and impact.
The practical theme is mentioned in 14 of the 16 reviewed studies and reveals that the resources needed to implement PROMs often receive inadequate consideration. Collecting PROMs data takes people away from other activities, and this is not feasible in many contexts unless additional staff time is resourced. Prioritizing the collection of PROMs data over existing activities requires careful planning. Achieving a successful implementation requires a high level of collaboration among colleagues and proactive managerial input. Successfully using PROMs in practice also depends on the availability of education and training for professionals and on access to appropriate technology to process the information efficiently.
The attitudinal theme is mentioned in 11 studies and captures a suspicion among healthcare professionals, including nurses, allied health professionals, and medical staff, about the reasons behind the use of PROMs. This is particularly evident in contexts where the purpose of PROMs collection is not transparent. In such circumstances, professionals question the motives for the data collection and express fear about how the results might affect their practice and patient care. In particular, there are concerns that PROMs might become an audit tool used by management to monitor performance, or to "name and shame" professionals. Furthermore, many professionals are simply not open to receiving feedback from patients and adopt an attitude that using such information would not improve their clinical practice.
The methodological theme is identified in 13 studies and extends the attitudinal theme into specific scientific concerns. Professionals frequently question the measurement properties of PROMs, the extent to which data collection is carried out to a high standard, and the extent to which clinicians or facilities are fairly compared.
Finally, the impact theme is identified in all 16 studies and captures a frustration with the value of PROMs in identifying and implementing opportunities for quality improvement. Specifically, there is a concern that insufficient attention is paid to modelling the causal mechanisms that lead some healthcare professionals to perform better than others when PROMs data are compared (13).
In a recent qualitative study, we explored surgeons' reactions to receiving peer benchmarked PROMs feedback (14). The study reinforces the findings of our systematic review and also identifies a fifth theme – conceptual issues. This theme reveals that professionals have difficulty comprehending the nature of subjective measurement, often confusing PROMs with patient satisfaction measures and incorrectly equating PROMs with clinical data.
The themes identified are also common among other types of quality improvement initiatives: our work simply illustrates specific instances of these issues in relation to PROMs. Implementing routine collection of PROMs data without paying attention to these issues might impose a substantial bureaucratic burden and cost on patients and professionals with little positive influence on care in return (15).
What can be done to improve the use of PROMs as quality improvement tools? First, provide professionals with much greater practical and methodological support, including training, when they are asked to collect and interpret PROMs data. At present, many policy makers operate on the flawed assumption that healthcare providers will "just get on with it" and find the staff and materials to collect PROMs data from their own resources.
Second, we need a deeper and more meaningful engagement with healthcare professionals, such as nurses, physicians, pharmacists, and allied health professionals, when designing quality improvement programmes that use PROMs data. We must accept that professionals may have very good reasons for not implementing or using PROMs, and not simply dismiss their concerns as being "old-fashioned" or "disrespectful of patients."
Third, we need to question the assumption that improvements in the care of whole patient groups (e.g., all patients undergoing hip replacement surgery within a hospital system) can be achieved by focusing on inter-provider comparisons. For example, a recent study of hospital-level variation in PROMs for patients undergoing hip replacement, knee replacement, groin hernia repair, or varicose vein surgery found "little inter-provider variation" which "did not change significantly over time" (12). In situations such as this, it may be more useful to focus on other aspects of the care episode, such as patient characteristics, type of treatment, or type of provider, when trying to explain variations in outcomes.
Fourth, we need PROMs that are fit for purpose. Using PROMs as diagnostic tools for poor clinical performance requires evidence about their sensitivity and specificity in this context, validated against "gold standard" measures of performance. To date, no such evidence is available for the most commonly used PROMs (16). We need PROMs with a strong track record of detecting providers known to have quality failings, and, until such PROMs are available, we should be circumspect about league tables or other outputs that purport to discriminate among providers.
In summary, the main challenge for researchers in this field is to demonstrate how using PROMs to compare clinicians or facilities can produce benefits at the level of whole patient groups. This requires comprehensive datasets and sophisticated analytic methods to identify the main sources of variations in PROMs. It also requires measures which are specifically designed to detect quality failings (16). Finally, and more fundamentally, those who shape the policy agenda must be willing to change their analytic focus if the current one proves futile. Current policies focus on the use of inter-provider comparisons as a lever for engineering quality improvement. It may prove more useful to focus on patient-level or intervention-level sources of variation in outcome if the full potential of PROMs is to be realised.

Maria B. Boyce
Department of Epidemiology and Public Health, University College Cork, Ireland
John P. Browne, PhD
Professor of Epidemiology and Public Health, University College Cork, Ireland
The views and opinions expressed are those of the authors and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Ms. Maria B. Boyce and Dr. John P. Browne state no financial, personal, business, or professional conflicts of interest with respect to this expert commentary.

  1. Devlin N, Appleby J. Getting the most out of PROMs: putting health outcomes at the heart of NHS decision-making. London (UK): King's Fund; 2010. 92 p.
  2. Boyce MB, Browne JP. Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Qual Life Res. 2013 Nov;22(9):2265-78.
  3. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract. 1999 Nov;5(4):401-16.
  4. Luckett T, Butow PN, King MT. Improving patient outcomes through the routine use of patient-reported data in cancer clinics: future directions. Psychooncology. 2009 Nov;18(11):1129-38.
  5. Espallargues M, Valderas JM, Alonso J. Provision of feedback on perceived health status to health care professionals: a systematic review of its impact. Med Care. 2000 Feb;38(2):175-86.
  6. Marshall S, Haywood K, Fitzpatrick R. Impact of patient-reported outcome measures on routine practice: a structured review. J Eval Clin Pract. 2006 Oct;12(5):559-68.
  7. Gilbody SM, House AO, Sheldon T. Routine administration of Health Related Quality of Life (HRQoL) and needs assessment instruments to improve psychological outcome--a systematic review. Psychol Med. 2002 Nov;32(8):1345-56.
  8. Gilbody SM, House AO, Sheldon TA. Routinely administered questionnaires for depression and anxiety: systematic review. BMJ. 2001 Feb 17;322(7283):406-9.
  9. Valderas JM, Kotzeva A, Espallargues M, Guyatt G, Ferrans CE, Halyard MY, et al. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res. 2008 Mar;17(2):179-93.
  10. Chen J, Ou L, Hollis SJ. A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Serv Res. 2013 Jun 11;13:211.
  11. Kotronoulas G, Kearney N, Maguire R, Harrow A, Di Domenico D, Croy S, et al. What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. J Clin Oncol. 2014 May 10;32(14):1480-501.
  12. Varagunam M, Hutchings A, Neuburger J, Black N. Impact on hospital performance of introducing routine patient reported outcome measures in surgery. J Health Serv Res Policy. 2014 Apr;19(2):77-84.
  13. Boyce MB, Browne JP, Greenhalgh J. The experiences of professionals with using information from patient-reported outcome measures to improve the quality of healthcare: a systematic review of qualitative research. BMJ Qual Saf. 2014 Jun;23(6):508-18.
  14. Boyce MB, Browne JP, Greenhalgh J. Surgeon's experiences of receiving peer benchmarked feedback using patient-reported outcome measures: a qualitative study. Implement Sci. 2014 Jun 27;9:84.
  15. Wolpert M. Do patient reported outcome measures do more harm than good? BMJ. 2013 May 1;346:f2669.
  16. National Quality Forum (NQF). Patient reported outcomes (PROs) in performance measurement. Washington (DC): National Quality Forum (NQF); 2013 Jan 10. 35 p.
