Tuesday, May 12, 2009
NQMC - Resources - Expert Commentary
Evaluating Evidence in the Development of Performance Measures
By: Rebecca A. Kresowik, BLS
Timothy F. Kresowik, MD
Performance measurement of physicians and health systems is receiving increased attention in this era of "pay-for-performance." To address the concerns of the physician and hospital community and to avoid unintended perverse consequences, performance measures should be grounded in sound medical evidence. Primary review and analysis of the medical literature to establish an evidence base for performance measures is an extremely time-consuming and labor-intensive process. Specialty societies and other organizations have already developed clinical practice guidelines that can potentially provide evidence-based statements for use in formulating clinical process performance measures. We will explore the advantages and challenges of using medical evidence, as well as outline a process for evaluating existing guidelines for use in the development of performance measures.
In the late 1990s, the American Medical Association (AMA) began a program to develop physician-level performance measures to be used for quality improvement in the physician office setting. This AMA program brought together physicians from all medical specialties and experts on performance measurement methodology, and later evolved into the Physician Consortium for Performance Improvement (PCPI). (1) By the end of 2008, the PCPI had developed clinical process measures for more than 40 clinical topics.
A strong foundation of evidence is an essential element for each of the PCPI performance measures. The first step of the PCPI measure development process, after topic selection, is to review existing clinical guidelines on the topic. Since clinical practice guidelines are developed by many organizations, each with different approaches to defining medical evidence, it is imperative for all measure developers to have an established process of evaluating the guidelines on which performance measures will be based.
Some guidelines identify the "level of evidence" (randomized trials, observational studies, expert opinion, etc.) on which each statement is based, while other guidelines do not designate an evidence rating for each guideline statement. The highest level of evidence is designated when the data have been derived from multiple, sufficiently powered, randomized clinical trials. Although basing measures on guidelines supported by multiple, consistent randomized trials (i.e., what is often referred to as Level I evidence) is ideal, limiting measures to this evidence base would result in very few performance measures, since many current guidelines are not based on this rigorous level of evidence.
There are many processes of care that are unlikely to be tested in clinical trials and yet may still be important and useful as a basis for performance measures, e.g., determining certain elements of the patient history, physical examination, or diagnostic studies. When an evidence-based therapy requires these diagnostic elements for appropriate implementation, obtaining them can be said to be linked to the evidence. As an example, obtaining a history from a patient with coronary artery disease regarding the frequency and severity of angina is not a process of care that will be tested in a clinical trial. Having this information is essential, however, for deciding the appropriateness of evidence-based interventions; therefore, this process can be considered a performance measure linked to evidence.
In developing performance measures, one has to be careful that this implicit relationship between processes and evidence does not become a slippery slope. There are many patient attributes (e.g., family history) that are related to prognosis in a way that can be clearly described as evidence-based. This does not mean that identifying the presence or absence of each of these attributes, or counseling a patient with regard to these attributes, should always become a performance measure. Performance measures that involve the collection of data through history, physical examination, or testing should focus only on patient attributes that are essential for an evidence-based therapeutic decision. Creating performance measures based on a consensus that the information would be "nice to know" is not a sound strategy. Measures that require counseling about patient conditions should be created only if there is evidence that counseling is an effective intervention for modifying that condition.
Another commonly used guideline rating system provides the "strength of the clinical recommendation." Strength of recommendation is applicable both to recommendations with a defined evidence base and to recommendations that are unlikely or impossible to be tested in randomized clinical trials. This rating system usually incorporates the potential benefit versus potential harm of the clinical recommendation, the potential costs associated with the recommendation, and the quality of the evidence for the clinical recommendation. These criteria obviously involve subjective judgment (opinion) on the part of the developer. Guideline developers use different definitions for the strength of recommendation: some use descriptions such as "strong recommendation," "recommended," or "not recommended," while others use a letter or numbering system. Some guideline developers combine the strength of recommendation and the level of evidence into one category, while other developers describe the evidence rating and recommendation strength separately.
Strength of recommendation ratings are more common than true evidence ratings in most guidelines. From the point of view of measure development, the considerations inherent in the strength of recommendation rating (benefit, harm, costs) are often more important than whether or not there are multiple randomized trials supporting the guideline statement. Basing measures on guidelines with the highest strength of recommendation is generally achievable and unlikely to cause untoward consequences if the measure development process is robust.
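As a rough illustration of how a measure developer might record these ratings when screening candidate guideline statements, the Python sketch below models each statement with an evidence level, a recommendation strength, and an "evidence-linked" flag, and marks the strongest candidates for measure development. The class names, rating scales, and screening threshold are illustrative assumptions for this sketch only; they are not the PCPI's actual scheme or any particular guideline developer's rating system.

```python
from dataclasses import dataclass
from enum import IntEnum

class EvidenceLevel(IntEnum):
    # Illustrative scale only; guideline developers use differing schemes.
    EXPERT_OPINION = 1
    OBSERVATIONAL = 2
    SINGLE_RCT = 3
    MULTIPLE_RCTS = 4   # often referred to as "Level I" evidence

class RecommendationStrength(IntEnum):
    # Illustrative scale; typically reflects benefit vs. harm, cost, and evidence quality.
    NOT_RECOMMENDED = 0
    WEAK = 1
    MODERATE = 2
    STRONG = 3

@dataclass
class GuidelineStatement:
    text: str
    evidence: EvidenceLevel
    strength: RecommendationStrength
    evidence_linked: bool = False  # e.g., history/exam data needed to apply an evidence-based therapy

def is_measure_candidate(stmt: GuidelineStatement) -> bool:
    """Screen a statement as a candidate basis for a performance measure.

    Assumed rule, mirroring the commentary: favor the strongest recommendations,
    and allow evidence-linked data-collection steps even without trial evidence.
    """
    if stmt.strength == RecommendationStrength.STRONG:
        return True
    return stmt.evidence_linked and stmt.strength >= RecommendationStrength.MODERATE

# Hypothetical example statements
statements = [
    GuidelineStatement("Prescribe antiplatelet therapy for patients with CAD",
                       EvidenceLevel.MULTIPLE_RCTS, RecommendationStrength.STRONG),
    GuidelineStatement("Document frequency and severity of angina",
                       EvidenceLevel.EXPERT_OPINION, RecommendationStrength.MODERATE,
                       evidence_linked=True),
    GuidelineStatement("Collect detailed family history of all conditions",
                       EvidenceLevel.EXPERT_OPINION, RecommendationStrength.WEAK),
]

for s in statements:
    label = "CANDIDATE" if is_measure_candidate(s) else "exclude"
    print(f"{label:9} | {s.text}")
```

In this sketch, the angina-documentation statement passes only because it is flagged as evidence-linked, which is the kind of judgment a measure development panel would make explicitly rather than encode in a threshold.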
With the current emphasis on "pay-for-performance" and "value-based purchasing" strategies, the need for strong evidence-based or evidence-linked performance measures is becoming even more critical. Many are hoping that these strategies will play a role in helping to curb the upward spiral of health care costs. Associating process performance measures with payment has the potential to be a strong motivator of implementation, resulting in an overall increase in the use of the processes incorporated into the measures. Using measures that focus on processes of care that are not backed by strong evidence therefore has the potential to waste health care resources rather than produce cost savings.
Authors
Rebecca A. Kresowik, BLS
Kresowik Consulting, Inc., Iowa City, Iowa
Timothy F. Kresowik, MD
Professor of Surgery
University of Iowa Carver College of Medicine, Iowa City, Iowa
Disclaimer
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Dr. and Ms. Kresowik report that they are consultants for the Physician Consortium for Performance Improvement (American Medical Association) and that they have provided performance improvement education for the American Academy of Family Physicians, North Central Medical Conference, and the Iowa Medical Society. Additionally, Dr. Kresowik is full-time faculty at the University of Iowa Carver College of Medicine.
Dr. and Ms. Kresowik state no personal financial interests or additional disclosures.
References
1. The Physician Consortium for Performance Improvement. Available at: www.ama-assn.org/ama/pub/category/2946.html. Accessed March 17, 2008.