Patient Safety Primer
Detection of Safety Hazards
Background and definitions
An unacceptably large proportion of patients experience preventable harm at the hands of the health care system, and even more patients experience errors in their care that (through early detection or sheer chance) do not result in clinical consequences. Considerable effort has been devoted to optimizing methods of detecting errors and safety hazards, with the goal of prospectively identifying hazards before patients are harmed and analyzing events that have already occurred to identify and address underlying systems flaws. Despite much effort, health care institutions are still searching for optimal methods to identify underlying system defects before patients are harmed and, when errors do occur, methods to recognize them as rapidly as possible to prevent further harm. This Primer reviews both prospective and retrospective methods to identify safety hazards that can lead to errors and adverse events. (Definitions of error, adverse events, and foundational patient safety concepts can be found in the Systems Approach Patient Safety Primer.)
Methods of prospectively identifying safety hazards
Failure mode and effect analysis (FMEA) is a common approach to prospectively determine error risk within a particular process. FMEA begins by identifying all the steps that must be taken for a given process to occur ("process mapping") and then, for each step, the ways it can go wrong (i.e., the failure modes), the probability that each error will be detected before causing harm, and the impact of the error if it actually occurs. The estimated likelihood of a particular process failure, the chance of detecting such a failure, and its impact are combined numerically to produce a criticality index, which provides a rough estimate of the magnitude of hazard posed by each step in a high-risk process. Steps ranked at the top (those with the highest criticality indices) should be prioritized for error proofing.
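To make the arithmetic concrete, the sketch below ranks a few hypothetical failure modes in a medication-use process by a criticality index computed as the product of occurrence, detection difficulty, and severity scores. The step names, scores, and scales are invented for illustration; real FMEA teams define their own failure modes, scales, and scoring conventions.

```python
# A minimal FMEA sketch: rank hypothetical failure modes by criticality index,
# here computed as occurrence x detection difficulty x severity (each 1-10).
# Step names and scores are invented for illustration only.

failure_modes = [
    # (process step, failure mode, occurrence, detection difficulty, severity)
    ("Prescribing",    "Wrong dose entered",       4, 6, 8),
    ("Transcription",  "Order misread",            3, 5, 7),
    ("Dispensing",     "Look-alike drug selected", 2, 7, 9),
    ("Administration", "Given to wrong patient",   2, 8, 10),
]

def criticality_index(occurrence, detection, severity):
    """Rough risk priority: higher values deserve earlier error proofing."""
    return occurrence * detection * severity

ranked = sorted(
    failure_modes,
    key=lambda fm: criticality_index(fm[2], fm[3], fm[4]),
    reverse=True,
)

for step, mode, occ, det, sev in ranked:
    print(f"{step:15s} {mode:28s} criticality = {criticality_index(occ, det, sev)}")
```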
FMEA (or similar techniques) has been used in other high-risk industries and offers a reasonable framework for prospective safety analysis. However, this technique's reliability has been called into question, as studies have shown that independent groups can reach widely differing opinions about the failure modes and criticality index of a given process. Another, more qualitative approach termed SWIFT ("structured what-if technique") can be used either as an adjunct to FMEA or as a stand-alone technique.
The field of human factors engineering attempts to identify and address safety problems that arise due to the interaction between people, technology, and work environments. Human factors engineers often lead safety efforts in other high-risk industries, and recent commentaries have called for greater integration of human factors principles into health care and patient safety.
Other methods to prospectively uncover safety hazards rely on qualitative approaches that emphasize the views of frontline providers. Establishing a culture of safety entails obtaining information on perceived safety problems from staff at all levels, through formal safety culture surveys or more informal methods such as executive walk rounds. Ethnographic approaches, which rely on direct field observations of health care personnel by researchers attuned to the cultural aspects of how care is provided, can reveal distinct classes of safety problems and have also been used to identify unintended consequences of safety policies.
Retrospective error detection methods
Techniques to retrospectively identify safety hazards can be loosely classified into two groups: those used to screen larger datasets for evidence of preventable adverse events that merit further investigation, and those used to analyze individual cases in which an adverse event has occurred or is strongly suspected. The former include trigger tools and methods of screening administrative datasets, while the latter include root cause analysis, mortality reviews, and related methods of in-depth investigation of individual cases.
Trigger tools alert patient safety personnel to probable adverse events so they can review the medical record to determine if an actual or potential adverse event has occurred. For instance, a hospitalized patient receiving naloxone (a drug used to reverse the effects of narcotics) may have previously received an excessive dose of morphine or some other opiate. In this case, the administration of naloxone would be a "trigger" to investigate possible adverse drug events. (In the emergency department, naloxone use would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting.) In cases in which the trigger correctly identifies an adverse event, causative factors can be determined and interventions developed to reduce the frequency of common causes of adverse events. The traditional use of triggers has been to efficiently identify adverse events through chart review or review of other data sources (such as pharmacy databases), and triggers can also be used to track rates of safety events over time. Though many trigger tools exist, the Institute for Healthcare Improvement's Global Trigger Tool has been widely used and validated in different patient populations.
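The basic logic of such a trigger can be sketched in a few lines: scan a medication administration log for naloxone given outside the emergency department and flag those charts for review. The record format, field names, and sample data below are hypothetical; actual trigger tools run against an institution's own pharmacy or electronic health record data.

```python
# A simplified trigger-tool sketch: flag charts for review when naloxone is
# administered outside the emergency department. Field names and sample
# records are hypothetical placeholders.

medication_administrations = [
    {"patient_id": "A001", "drug": "naloxone", "unit": "MED-SURG"},
    {"patient_id": "A002", "drug": "morphine", "unit": "MED-SURG"},
    {"patient_id": "A003", "drug": "naloxone", "unit": "ED"},
]

def naloxone_trigger(record):
    """Fires when naloxone is given on an inpatient unit (ED use is excluded,
    since there it usually reflects treatment of an overdose on presentation)."""
    return record["drug"].lower() == "naloxone" and record["unit"] != "ED"

charts_to_review = [
    r["patient_id"] for r in medication_administrations if naloxone_trigger(r)
]
print("Charts flagged for adverse drug event review:", charts_to_review)
```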
As with any alert system, the threshold for generating triggers needs to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms. (This concern is less relevant when triggers are used as chart review tools, since the "cost" of a false positive is relatively low: mainly the staff time and resources required for medical record review.)
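As a worked example with invented counts, the positive predictive value of a trigger is simply the proportion of its firings that chart review confirms as true adverse events:

```python
# Worked example with invented counts: positive predictive value (PPV) of a trigger.
trigger_firings = 200    # times the trigger fired over a review period (hypothetical)
confirmed_events = 30    # adverse events confirmed on chart review (hypothetical)

ppv = confirmed_events / trigger_firings
print(f"Positive predictive value: {ppv:.0%}")
# A PPV of 15% may be workable for retrospective chart review but would be
# far too noisy for a real-time interruptive alert.
```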
Administrative datasets, typically generated for billing purposes, contain information on clinical diagnoses and treatments for large patient populations. Several methods have been evaluated for screening these datasets for evidence of adverse events. Among these, the AHRQ Patient Safety Indicators (PSIs), which use administrative data to screen for complications of hospital care, have been widely studied and shown to be associated with increased length of stay, mortality, and hospital costs. The PSIs and similar tools are best used as screening techniques to identify potential hazards that should be investigated further. For example, if a hospital notes an elevated incidence of the postoperative sepsis PSI, it may have a systematic problem with failure to rescue in postoperative patients. The PSIs should not be used to compare patient safety between hospitals, or to estimate the overall incidence of adverse events at an institution.
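The general screening pattern can be illustrated with a deliberately simplified sketch: count elective surgical discharges that carry a secondary sepsis diagnosis code and flag them for further review. This is not the actual AHRQ PSI specification; the field names, sample records, and single diagnosis code below are placeholders, and real indicators rely on long, maintained code lists with detailed inclusion and exclusion rules.

```python
# A deliberately simplified sketch of screening administrative (billing) data
# for a potential postoperative sepsis signal. NOT the actual PSI specification;
# codes, field names, and sample discharges are placeholders.

discharges = [
    {"id": 1, "had_elective_surgery": True,  "secondary_dx": ["A41.9"]},  # sepsis code
    {"id": 2, "had_elective_surgery": True,  "secondary_dx": []},
    {"id": 3, "had_elective_surgery": False, "secondary_dx": ["A41.9"]},
]

SEPSIS_CODES = {"A41.9"}  # placeholder; real indicators use maintained code lists

flagged = [
    d["id"] for d in discharges
    if d["had_elective_surgery"] and SEPSIS_CODES & set(d["secondary_dx"])
]
screened = [d["id"] for d in discharges if d["had_elective_surgery"]]

print(f"Flagged for review: {flagged}; screened rate = {len(flagged)}/{len(screened)}")
```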
Voluntary error reporting systems are ubiquitous in health care institutions and are an integral part of organizational safety efforts. Although voluntary reporting systems are most often used to report errors that have already occurred, near-miss reporting can help prospectively identify system flaws as well. At the national level, regulations for implementing the Patient Safety and Quality Improvement Act became effective on January 19, 2009. The legislation provides confidentiality and privilege protections for patient safety information when health care providers work with new expert entities known as Patient Safety Organizations (PSOs). Health care providers may choose to work with a PSO and specify the scope and volume of patient safety information to share with a PSO. AHRQ has also developed common definitions and reporting formats (Common Formats) for patient safety events, in order to facilitate aggregation and use of patient safety information.
Hospitals also routinely conduct root cause analyses and mortality reviews when patients experience bad outcomes suspected to be related to an adverse event. Root cause analysis is a formal multidisciplinary process that has the explicit goal of identifying systematic problems in care. Standardized mortality reviews may be used to analyze specific cases for systematic harm or to estimate the proportion of deaths related to adverse events. Similarly, autopsies have traditionally been used to identify diagnostic errors, and traditional morbidity and mortality conferences are increasingly being adapted to help uncover underlying system flaws.
Many organizations are seeking to engage patients in safety efforts, and some studies have shown that patients can identify problems in care that were not revealed through more traditional methods. Reviews of closed malpractice claims and risk management databases have also yielded useful information regarding the types of adverse events that frequently occur in specific practice settings.
Novel approaches to adverse event detection have focused on ways in which errors may be detected in real time. Innovative studies that take advantage of electronic medical records have used natural language processing and real-time triggers to identify errors contemporaneously, allowing for immediate targeting of safety solutions. As electronic medical records continue to evolve, these approaches are likely to make real-time identification of errors and near misses more accurate and efficient. Comprehensive data warehouses, which combine administrative data with other sources (such as laboratory results and pharmacy databases) and can be searched with specialized algorithms, also have promise as a means of reliably and efficiently identifying patient-level harm.
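As a simple illustration of the data warehouse approach, the sketch below joins hypothetical pharmacy and laboratory feeds to flag patients on warfarin with a markedly elevated INR, a commonly cited laboratory trigger for possible anticoagulant-related harm. The table layouts, threshold, and sample values are invented for illustration and do not represent any particular institution's algorithm.

```python
# A minimal data-warehouse-style sketch: join hypothetical pharmacy and lab
# feeds to flag patients on warfarin whose INR is markedly elevated.
# Field names, threshold, and sample data are invented for illustration.

active_medications = [
    {"patient_id": "P1", "drug": "warfarin"},
    {"patient_id": "P2", "drug": "metformin"},
    {"patient_id": "P3", "drug": "warfarin"},
]

lab_results = [
    {"patient_id": "P1", "test": "INR", "value": 6.2},
    {"patient_id": "P3", "test": "INR", "value": 2.4},
]

INR_THRESHOLD = 5.0  # illustrative cutoff for a "supratherapeutic INR" trigger

on_warfarin = {m["patient_id"] for m in active_medications if m["drug"] == "warfarin"}
flagged = [
    r["patient_id"] for r in lab_results
    if r["patient_id"] in on_warfarin
    and r["test"] == "INR"
    and r["value"] >= INR_THRESHOLD
]
print("Patients flagged for real-time review:", flagged)
```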
Current context
The Joint Commission currently requires all hospitals to conduct one prospective risk assessment every 18 months (typically through performing an FMEA) and also requires performance of a root cause analysis under certain circumstances (such as when a sentinel event occurs). All hospitals are also mandated to maintain a voluntary error reporting system. Beyond these requirements, however, there are no consensus standards on how hospitals or clinics should assess their safety hazards, either prospectively or retrospectively. What is clear is that no single method is comprehensive enough to provide a full picture of patient safety at an institution. A seminal study that compared safety data from five separate sources (voluntary error reports, malpractice claims, patient complaints, executive walk rounds, and a risk management database) found that each source identified different types of errors. For example, diagnostic errors were almost never identified through voluntary reports but were a relatively common source of malpractice claims. This led one commentator to compare safety hazard detection methods to the Indian fable in which five blind men describe an elephant in widely varying terms (as a wall, fan, spear, snake, or tree), depending on which part of the animal they touched. Similarly, an institution's picture of patient safety will hinge on which methods it emphasizes for error detection, and a comprehensive picture can only be obtained by integrating multiple methods.