Methodological Considerations in Generating Provider Performance Scores for Use in Public Reporting: A Guide for Community Quality Collaboratives
White Paper
-----------------------
This paper is intended for use by Chartered Value Exchanges (CVEs), community collaboratives, and other organizations interested in creating public reports on the performance of health care providers in their communities. It addresses the issue of inconsistent reports based on the same data and identifies the key methodological decision points that precede publication of a performance report.
Print version (PDF, 880 KB): http://www.ahrq.gov/qual/value/perfscoresmethods/perfscoresmethods.pdf
----------------------
Prepared by:
Mark W. Friedberg, M.D., M.P.P., RAND Corporation
Cheryl L. Damberg, Ph.D., RAND Corporation
With assistance from:
Elizabeth A. McGlynn, Ph.D., RAND Corporation
John L. Adams, Ph.D., RAND Corporation
Contract No. HHSA290200810037C
Contents
Acknowledgments
Foreword
Executive Summary
Introduction
Types of Measures, Providers, and Data
Definition of "Provider"
How This Paper Is Organized
Overarching Methodological Issue: Performance Misclassification
A. What is performance misclassification?
B. Why is performance misclassification important?
C. What causes performance misclassification?
Decisions Encountered During Key Task Number 1: Negotiating Consensus on Goals and "Value Judgments" of Performance Reporting
A. What are the purposes of publicly reporting provider performance?
B. What will be the general format of performance reports?
C. What will be the acceptable level of performance misclassification due to chance?
Decisions Encountered During Key Task Number 2: Selecting the Measures That Will Be Used To Evaluate Provider Performance
A. Which measures will be included in a performance report?
B. How will the performance measures be specified?
C. What patient populations will be included?
Decisions Encountered During Key Task Number 3: Identifying Data Sources and Aggregating Performance Data
A. What kinds of data sources will be included?
B. How will data sources be combined?
C. How frequently will data be updated?
Decisions Encountered During Key Task Number 4: Checking Data Quality and Completeness
A. How will tests for missing data be performed?
B. How will missing data be handled?
C. How will accuracy of data interpretation be assessed?
Decisions Encountered During Key Task Number 5: Computing Provider-Level Performance Scores
A. How will performance data be attributed to providers?
B. What are the options for handling outlier observations?
C. Will case mix adjustment be performed? (If so, how?)
D. What strategies will be used to limit the risk of misclassification due to chance?
Decisions Encountered During Key Task Number 6: Creating Performance Reports
A. Will performance be reported at single points in time, or as trends?
B. How will numeric performance scores be reported?
C. How will performance be categorized?
D. Will composite measures be used?
E. If composite measures will be used, which individual measures will be combined?
F. How will each composite measure be constructed from a given set of individual measures?
G. What final validity checks might improve the accuracy and acceptance of performance reports?
Summary of Methodological Decisions Made by a Sample of CVE Stakeholders
What are the purposes of publicly reporting provider performance?
What will be the general format of performance reports?
What will be the acceptable level of performance misclassification due to chance?
Which measures will be included in a performance report?
How will performance measures be specified?
What patient populations will be included?
What kinds of data sources will be included?
How will data sources be combined?
How frequently will data be updated?
How will tests for missing data be performed?
How will missing data be handled?
How will accuracy of data interpretation be assessed?
How will performance data be attributed to providers?
Will case mix adjustment be performed? (If so, how?)
What strategies will be used to limit the risk of misclassification due to chance?
Will composite measures be used?
What final validity checks might improve the accuracy and acceptance of performance reports?
Appendix 1: Validity and Systematic Performance Misclassification
A. What is validity?
B. Systematic performance misclassification: a threat to validity
C. Causes of systematic performance misclassification
Appendix 2: Performance Misclassification Due to Chance
A. What is misclassification due to chance?
B. Why focus on the risk of misclassification due to chance?
C. What determines the risk of misclassification due to chance?
References
The views expressed in this paper are those of the authors. No official endorsement by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services is intended or should be inferred.
This document is in the public domain and may be used and reprinted without permission. AHRQ appreciates citation as to source.
AHRQ Publication No. 11-0093
Current as of September 2011
--------------------------
Internet Citation:
Friedberg MW, Damberg CL. Methodological Considerations in Generating Provider Performance Scores for Use in Public Reporting: A Guide for Community Quality Collaboratives. AHRQ Publication No. 11-0093, September 2011. Prepared by RAND Corporation under Contract No. HHSA290200810037C. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/qual/value/perfscoresmethods/
--------------------------
Methodological Considerations in Generating Provider Performance Scores for Use in Public Reporting: A Guide for Community Quality Collaboratives