
National Quality Measures Clearinghouse | Expert Commentaries: Measurement of Diagnostic Errors Is a Key First Step to Their Reduction


National Quality Measures Clearinghouse (NQMC)



Measurement of Diagnostic Errors Is a Key First Step to Their Reduction
By: Hardeep Singh, MD, MPH
The need to measure diagnostic error
Diagnostic errors are among the most common types of medical error, experienced by an estimated 12 million patients in the United States (U.S.) each year (1,2). A recent Institute of Medicine report, "Improving Diagnosis in Health Care," highlighted that diagnostic errors often have the potential to cause substantial harm (3). As data from multiple sources indicate, they typically involve relatively common conditions that are either misdiagnosed or undetected altogether (4-6). About 1 in 8 surveyed patients in Massachusetts reported having "personally been involved in a medical error situation in the past five years" related to misdiagnosis (7). Despite their prevalence, however, diagnostic errors are seldom a focus of patient safety initiatives (8). One reason for this oversight is that they are notoriously difficult to define and measure (9). It is therefore not surprising that there are gaps in performance measures to monitor and improve the quality and safety of diagnosis. In this commentary, I use examples from our research on this topic to outline some measurement challenges and discuss how a recently published conceptual framework could be useful to advance the science in this area.
Challenges of measuring diagnostic errors
Research on diagnostic errors has relied heavily on case finding long after the error has occurred, either through non-representative sources such as malpractice claims and autopsy findings, or through relatively inefficient means such as random chart reviews. Potentially, data from electronic health records (EHRs) and other repositories of health information could be leveraged to better detect diagnostic errors, perhaps even at a point when harm can still be mitigated (10). However, diagnosis is an evolving process that involves uncertainty, and it is not easy to detect the presence of an error (9). Research is needed to determine the value of these emerging data sources and to develop objective methods for meaningful measurement and analysis (11).
Another important measurement challenge is how to define and operationalize diagnosis. Diagnosis involves multiple people (at a minimum, the patient and provider, in addition to any consultants or ancillary service providers), often multiple locations, and may change over time as new information is obtained. Therefore, measurement methods must account for the variety of patient-, provider-, and system-level obstacles that can delay or distort a valid diagnosis. Care fragmentation is a problem in outpatient settings, but large EHR repositories that house data from different providers and locations can be leveraged to provide a more complete picture, for example by applying algorithms to the repository to identify patients with abnormal test results that are not followed up with timely action (10,12,13).
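To make this idea concrete, the following sketch illustrates, in simplified form, how such a trigger algorithm might be applied to data drawn from an EHR repository. It is only an illustration: the column names, the 60-day follow-up window, and the definition of a "follow-up action" are assumptions for this example, not the published trigger criteria.

from datetime import timedelta

import pandas as pd

# Hypothetical trigger: flag abnormal test results with no documented
# follow-up action within a defined window. Column names and the 60-day
# window are assumptions for illustration only.
FOLLOW_UP_WINDOW = timedelta(days=60)

def flag_missed_follow_up(results: pd.DataFrame, actions: pd.DataFrame) -> pd.DataFrame:
    """Return abnormal results lacking a timely follow-up action.

    results: one row per test result, with columns
        patient_id, result_id, result_date (datetime), abnormal (bool)
    actions: one row per follow-up action (repeat test, referral,
        documented plan), with columns patient_id, action_date (datetime)
    """
    abnormal = results[results["abnormal"]]

    # Pair each abnormal result with all follow-up actions for that patient.
    merged = abnormal.merge(actions, on="patient_id", how="left")

    # An action "covers" a result if it occurs within the window after it.
    merged["covered"] = (
        (merged["action_date"] >= merged["result_date"])
        & (merged["action_date"] <= merged["result_date"] + FOLLOW_UP_WINDOW)
    )

    # Flag results with no covering action; these records would then go to
    # manual chart review to confirm or rule out a missed opportunity.
    covered_by_id = merged.groupby("result_id")["covered"].any()
    return abnormal[~abnormal["result_id"].map(covered_by_id)]

In practice, such trigger logic would need careful clinical specification (which results count as abnormal, which actions count as follow-up) and validation against chart review before being used for measurement.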
Finally, measurement of diagnostic errors or breakdowns in the diagnostic process needs to be actionable. Even if currently available methods could yield more valid and reliable data about diagnostic errors, most institutions today lack dedicated resources to analyze and act upon this information. Thus, further development in diagnostic error measurement should take into consideration the type, timing, and delivery of data to ensure that measurement is meaningful, clear, and conducive to quality improvement efforts.
A framework for definition and measurement of diagnostic errors
In response to the need for better measurement and tracking of diagnostic errors, we have developed the Safer Dx framework, briefly described below (14). We define diagnostic errors as missed opportunities to make a correct or timely diagnosis based on the available evidence, regardless of patient harm (15). The framework aims to create a robust foundation for using a systems-based approach to advance measurement of diagnostic error and acknowledges that determining diagnostic error in individual patients might not be easy (9).
The organization of the Safer Dx framework follows Donabedian's Structure-Process-Outcome model. The structural component defines the "sociotechnical work system" composed of various dimensions including people, technologies (particularly health information technologies), organizational policies and practices, and external regulations and pressures (16). Risks for diagnostic error arise from interactions among these dimensions of the sociotechnical health care system. Hence, diagnostic error is not viewed as solely a matter of a single clinician's cognitive performance. As an example, the risk for diagnostic error may increase when clinical productivity demands are coupled with a poorly designed user interface for the EHR and/or when clinicians are not provided with optimal training for EHR use.
The process aspects of the Safer Dx framework are defined by our team's previous work on operationalizing the diagnostic process (4) and include: 1) the patient–provider encounter (history, physical examination, ordering tests/referrals based on assessment); 2) performance and interpretation of diagnostic tests; 3) follow-up and tracking of diagnostic information over time; 4) subspecialty and referral-specific factors; and 5) patient-related factors. Breakdowns in the diagnostic processes can occur at any of these points. Our ability to measure diagnostic errors comprehensively depends on assessment of all of these processes, as well as reliable and valid measures of diagnostic performance.
The primary outcome in the Safer Dx framework is safe diagnosis (correct and timely, as opposed to missed, delayed, wrong, or overdiagnosis). However, the framework also accounts for more far-reaching outcomes related to health care delivery (e.g., value) and patient functioning or outcomes. For instance, optimizing the diagnostic process involves getting the correct and timely diagnosis using the least amount of resources while being careful to avoid overtesting, overdiagnosis and overtreatment. The Safer Dx framework recognizes the many complex aspects of identifying diagnostic errors, reminds us of the need to contextualize diagnostic errors within real world clinical settings, and highlights the health care system's responsibility to continually improve.
Figure 1: The Safer Dx Framework for Measurement and Reduction of Diagnostic Errors (10)
Figure description: The Safer Dx framework accounts for the complex adaptive sociotechnical work system in which diagnosis takes place (the structure), the distributed process dimensions in which diagnoses evolve beyond the doctor's visit (the process) and resultant outcomes of "safe diagnosis," i.e., correct and timely, as well as patient and health care outcomes (the outcomes). The five interactive diagnostic process dimensions, represented by the rectangle in the far left of the figure, include: 1) the patient-provider encounter & initial diagnostic assessment (history, physical examination, ordering tests/referrals based on assessment) represented in the square in the top left corner of the rectangle; 2) performance and interpretation of diagnostic tests represented in the square in the top right corner of the rectangle; 3) follow-up and tracking of diagnostic information over time represented in the square in the bottom left corner of the rectangle; 4) subspecialty consultation and referral-specific factors represented in the square in the bottom right corner of the rectangle; and 5) patient-related factors represented in the center circle of the rectangle which overlaps each of the four squares. The bidirectional arrows connect the 4 squares. Represented within the wide arrow to the right of the diagnostic process dimensions rectangle is the retrospective and prospective measurement of diagnostic errors, both of which must be reliable and valid. This wide arrow points to a rectangle to its right, which captures the results of these measurements. These include collective mindfulness, organizational learning, improved collaboration, and better measurement tools and definitions. This rectangle includes three arrows: one emerging from the top and pointing to "changes in policy and practice to reduce preventable harm from missed, delayed, wrong or over diagnosis"; one emerging from the bottom and pointing to "feedback for improvement"; and one emerging from the right to a "safer diagnosis" circle. Two arrows point to the right of the "safer diagnosis" circle: the top arrow points to an "improved value of health care" square and the bottom arrow points to an "improved patient outcome" square. Arrows emerging from these two squares flow to the left and point back to the diagnostic process dimensions rectangle indicating that the flow between these elements is continuous and inter-related. All of this knowledge created by measurement will lead to changes in policy and practice to reduce diagnostic errors as well as feedback for improvement.

Note: The sociotechnical work system includes 8 technological and non-technological dimensions, as well as external factors affecting diagnostic performance and measurement such as payment systems, legal factors, national quality measurement initiatives, accreditation, and other policy and regulatory requirements.
Opportunities for prevention and response to diagnostic error 
Historically, organizations have prioritized other quality and safety concerns over diagnostic errors, in part due to the complexities of defining and monitoring these events (17). The Safer Dx framework provides a conceptual foundation to identify potential areas of vulnerability in the diagnostic process and proactively assess opportunities for improvement. For example, in our diagnostic error research in primary care, we identified problems in the patient-provider encounter (such as the history, physical examination, and ordering of tests or referrals based on the assessment) and in follow-up of test results as key areas of vulnerability (9,18-20). We then developed both retrospective and prospective EHR-based 'triggers' to identify clinical situations that could be further examined for missed opportunities in diagnosis through medical record reviews (10,13,21). While manual review is needed to confirm the presence or absence of an error, electronic triggers can help select high-risk records for further review and enable an organization to measure and learn from errors. Similarly, in the area of health information technology-related patient safety, we recognized the emerging need to clearly operationalize measurement concepts and recently developed a set of nine proactive self-assessment tools that organizations can use to reduce risks related to the use of EHRs and related technologies (22). Similar sets of assessment tools could be envisioned for the diagnostic process to measure opportunities for improvement proactively rather than only fixing issues after the fact.
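As a simple illustration of how the yield of such a trigger-plus-review workflow might be summarized, the short sketch below computes a trigger's positive predictive value from chart-review confirmations. The counts are invented for illustration and are not figures from the cited studies.

# Hypothetical summary of an e-trigger's yield after manual chart review;
# the counts below are invented for illustration, not study results.
records_flagged = 250                 # records selected by the electronic trigger
reviews_completed = 250               # flagged records that underwent chart review
confirmed_missed_opportunities = 45   # reviews confirming a missed opportunity

# Positive predictive value: confirmed missed opportunities per reviewed record.
ppv = confirmed_missed_opportunities / reviews_completed
print(f"Trigger positive predictive value: {ppv:.0%}")  # prints "Trigger positive predictive value: 18%"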
Health care leaders and policymakers require further evidence on frequency, types and origins of diagnostic errors in order to devise optimal strategies to reduce their burden (23). While further work is essential to develop and validate meaningful measures of diagnostic performance to shape performance improvement efforts, a comprehensive framework that accounts for the complexities of diagnosis can ensure that measurement strategies are appropriately targeted.

Authors
Hardeep Singh, MD, MPH
Chief, Health Policy, Quality & Informatics Program, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, Texas
Disclaimer
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the Department of Veterans Affairs, the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Dr. Hardeep Singh declared no conflicts of interest with respect to this expert commentary.
Dr. Singh is supported by the VA Health Services Research and Development Service (CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, and the Agency for Healthcare Research and Quality (R01HS022087 and R21 HS23602). His work is also supported in part by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413).
References


  1. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf. 2014;23(9):727-31.
  2. Gandhi TK, Kachalia A, Thomas EJ, Puopolo AL, Yoon C, Brennan TA, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med. 2006;145(7):488-96.
  3. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington (DC): The National Academies Press; 2015.
  4. Singh H, Giardina TD, Meyer AN, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173(6):418-25.
  5. Ely JW, Kaldjian LC, D'Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med. 2012;25(1):87-97.
  6. Schiff GD, Hasan O, Kim S, Abrams R, Cosby K, Lambert BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-7.
  7. Datz T. Poll finds many in Massachusetts have firsthand experience with a medical error. [internet]. Boston (MA): Harvard School of Public Health; 2014 Dec 2 [accessed 2015 Aug 25]. Available: http://www.hsph.harvard.edu/news/press-releases/many-in-massachusetts-have-firsthand-experience-with-a-medical-error/.
  8. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22 Suppl 2:ii21-ii27.
  9. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis. 2015;2(2):97-103.
  10. Murphy DR, Laxmisan A, Reis BA, Thomas EJ, Esquivel A, Forjuoh SN, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf. 2014;23(1):8-16.
  11. Zwaan L, Schiff GD, Singh H. Advancing the research agenda for diagnostic error reduction. BMJ Qual Saf. 2013;22 Suppl 2:ii52-ii57.
  12. Murphy DR, Wu L, Forjuoh SN, Meyer AN, Singh H. An electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol. 2015 Nov 1;33(31):3560-7.
  13. Murphy DR, Thomas EJ, Meyer AN, Singh H. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology. 2015 Oct;277(1):81-7.
  14. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-10.
  15. Singh H. Editorial: helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014 Mar;40(3):99-101.
  16. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(Suppl 3):i68-i74.
  17. Graber ML, Trowbridge RL, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf. 2014;40(3):102-10.
  18. Singh H, Thomas EJ, Khan MM, Petersen LA. Identifying diagnostic errors in primary care using an electronic screening algorithm. Arch Intern Med. 2007;167(3):302-8.
  19. Singh H, Thomas EJ, Mani S, Sittig DF, Arora H, Espadas D, et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch Intern Med. 2009;169(17):1578-86.
  20. Singh H, Thomas EJ, Sittig DF, Wilson L, Espadas D, Khan MM, et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain? Am J Med. 2010;123(3):238-44.
  21. Singh H, Giardina TD, Forjuoh SN, Reis MD, Kosmach S, Khan MM, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100.
  22. Safety Assurance Factors for EHR Resilience (SAFER) guides. [internet]. Washington (DC): Office of the National Coordinator for Health Information Technology (ONC); 2014 [accessed 2015 Aug 21]. Available: http://www.healthit.gov/safer/safer-guides.
  23. Singh H. Diagnostic errors: moving beyond 'no respect' and getting ready for prime time. BMJ Qual Saf. 2013;22(10):789-92.
