In Conversation with… Brian Jarman, PhD
Editor's note: Sir Brian Jarman is an emeritus professor at Imperial College School of Medicine and a past president of the British Medical Association. He developed the methodology for the hospital standardized mortality ratios and was involved with the Bristol Royal Infirmary Inquiry. He has also worked on adjusted hospital death rates in England, Scotland, the United States, Canada, the Netherlands, and Sweden. We spoke with him about the development of hospital standardized mortality ratios, and their role in monitoring safety and quality.
Interview
Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us a little bit about what got you interested, first in quality measurement and then in hospital standardized mortality ratios (HSMRs).
Sir Brian Jarman: In 1988, I was asked to set the formula for distributing the resources of the National Health Service down to each hospital. The formula that I set then is more or less the formula that's used now. We used a different form of deprivation score then, which was called the Jarman Index, but the principle is the same; it was based on the adjusted utilization of health care. At the time, I was trying to work out what to allocate to each hospital and what measures of quality I could use. One of them was an adjusted overall hospital mortality. So I developed it then but didn't publish it until later.
In 1998, I was invited to be a part of the Bristol Royal Infirmary inquiry into children's heart surgery. As part of the inquiry, we concluded that no one really knew who was responsible for monitoring the quality of health care in England.
At the end I thought, it's unacceptable that the mortality in the Bristol children's heart unit was 30%. The parents could have driven an hour away and gone to a hospital with a mortality of 6%–8%. We as a medical profession had an obligation to make that information available to those parents. So we started our unit, and our motto was that Bristol should never happen again. Beginning in 2001, we produced these HSMRs, these adjusted mortalities, and we published them, for all the hospitals, in the newspaper.
In 2006, my colleagues at Imperial College and I decided that, instead of just having the HSMRs available on the Web site, we would send letters to the CEOs of hospitals that had double the national death rate, where the chance of it being a false alarm was less than 1 in 1000. Under what we call Clinical Governance, if a CEO gets such a letter, he or she is required to take action. We sent a copy of these letters to the Regulator, the Health Care Commission, along with the mortality alerts and the hospital's mortality ratio. The regulators also received complaints from patients that drew attention to the problems at Mid Staffordshire in 2007.
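Editor's note: The alert criterion Dr. Jarman describes (a death rate double the national rate, with a false-alarm probability under 1 in 1000) can be illustrated with a simple one-sided Poisson tail test. The sketch below is illustrative only: the figures are hypothetical, and a Poisson model is an assumption of this sketch rather than the unit's actual alert methodology, which is not detailed in this interview.

```python
# Hedged illustration of a mortality-alert threshold: how likely is it
# that a hospital truly performing at the national death rate would show
# this many deaths purely by chance? Assumes a Poisson model for the
# death count; figures are hypothetical.
from scipy.stats import poisson

expected = 50   # deaths expected if the hospital had the national rate (hypothetical)
observed = 100  # observed deaths, double the expected count (hypothetical)

# P(X >= observed) when X ~ Poisson(expected): the false-alarm probability.
false_alarm_p = poisson.sf(observed - 1, expected)
print(f"False-alarm probability: {false_alarm_p:.1e}")  # well below 1 in 1000
```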
The Head of Investigations at the Health Care Commission said that, while all those things were important, it was the mortality alerts that made their staff go in and investigate. They spent a year on that investigation.
RW: I'm interested in your philosophy around quality measurement in general. You've mentioned its potential role in feeding data back to the hospitals, to the chief executives to let them see that there might be a problem and then require of them an improvement strategy. You've also mentioned its use by regulatory bodies that might have certain standards and take actions when they see evidence of quality problems. You've also mentioned transparency, and I note in your recent Lancet article you said the families of the children who underwent cardiac surgery at Bristol should have been told about lower mortality rates at other units. How do you think through what the goals are and how to balance measurement for transparency, as a regulatory approach, and as a catalyst for internal improvement?
BJ: At Bristol we spent a long time trying to measure morbidity, because quite a number of children who had been brain damaged were being brought in in wheelchairs, and it was a major topic in the press. We eventually got reports that said you cannot accurately measure morbidity, so we didn't do the morbidity measures.
We had 2000 sets of medical notes from Bristol in a very large room. We took a random stratified sample of 100 admissions, and we employed seven groups of seven clinicians. Each group had a pediatric cardiologist, a pediatric cardiac surgeon, a pediatric cardiac nurse, and so on. They gave a report that was consistent with the mortality data. We hope that people use the data and say, well, let's dig down to see if there's a real problem. We point out that observed minus expected deaths isn't unnecessary deaths, and it isn't avoidable deaths. It's just the difference between the number of deaths that you get in the hospital and the number that you would get if the hospital's patients had the national death rate by age, sex, and so on. And we hope that people will use the measures to introduce improvements.
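Editor's note: The adjustment Dr. Jarman describes is an indirect standardization: national stratum-specific death rates are applied to the hospital's own case mix to give the expected deaths, and the HSMR is the ratio of observed to expected deaths, scaled to 100. The following is a minimal sketch of that calculation; the strata and all figures are hypothetical, and the published HSMR model adjusts for many more factors (diagnosis, comorbidity, admission type, and others).

```python
# Minimal sketch of an indirectly standardized mortality ratio (HSMR).
# All figures are hypothetical; the real model uses far more strata.

# National death rates per stratum (age band x sex), hypothetical values.
national_rates = {
    ("0-64", "F"): 0.010,
    ("0-64", "M"): 0.012,
    ("65+", "F"): 0.080,
    ("65+", "M"): 0.095,
}

# One hospital's admissions per stratum, and its observed deaths.
hospital_admissions = {
    ("0-64", "F"): 4000,
    ("0-64", "M"): 3800,
    ("65+", "F"): 2500,
    ("65+", "M"): 2200,
}
observed_deaths = 620

# Expected deaths: apply national stratum rates to the hospital's case mix.
expected_deaths = sum(
    national_rates[stratum] * n for stratum, n in hospital_admissions.items()
)

# HSMR is conventionally scaled so that 100 means "deaths as expected".
hsmr = 100 * observed_deaths / expected_deaths
print(f"Expected: {expected_deaths:.0f} deaths, HSMR: {hsmr:.0f}")  # ~495, ~125
```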
One of the best examples was the introduction of care bundles in hospitals in North West London, which was published in the BMJ. They had had some problems with young men suffering organ failure in a drug trial, and they'd had some excess deaths in maternity, so morale was low. The chief nursing officer went over to IHI [Institute for Healthcare Improvement] and learned about care bundles. They then developed further care bundles to cover the diagnoses with the largest numbers of deaths, with the aim of lowering mortality. They had a system whereby I monitored for them each admission for the diagnoses covered by the care bundles. They planned it for quite a while, got big buy-in, implemented it, and monitored it for a full year. And they were able to get such a reduction that they went from just above the national average in 2004–2005 to the lowest HSMR in England in 2007–2008.
There are other examples. At another hospital, Bolton, the crude mortality for hip fracture operations went up to 25%, and we sent them a mortality alert. They reduced the time from admission to operation from more than 3 days to 36 hours, set up an 8-bed fracture ward, and employed a full-time orthopedic geriatrician. After that, their mortality dropped to under 5%. They also reduced length of stay and increased patient satisfaction.
RW: So it sounds like you're arguing that the most important thing that these numbers do is drive improvement strategies within the institutions.
BJ: That's right. And they are now being used throughout the NHS. They are published each year, and the Care Quality Commission, which replaced the Health Care Commission, includes them among the indices it publishes monthly.
RW: There's been debate in the medical literature about using the HSMR as a quality measure, and some of it is on purely technical grounds. Are the data sources for the case-mix adjustment adequate, given that it's difficult to glean clinical data from present-day charts? When a hospital sees that its HSMR or other adjusted quality data are not as good as they should be, its first response tends to be to play with the coding rather than to actually improve performance. How do you respond to those critiques?
BJ: The three main people commenting on the Mid Staffs Inquiry were Robert Francis, the Chair of the Inquiry; Don Berwick, who wrote recommendations for the Prime Minister; and Sir Bruce Keogh, the Medical Director of the NHS, who gave evidence. All said that had they been monitoring HSMRs, they would have picked up Mid Staffs earlier. Another thing Bruce said, and I agree, was that monitoring patient and staff complaints and surveys would also be important.
In terms of the actual methodology, our philosophy is that if anyone can suggest a change that stands up clinically and statistically, we're willing to adopt it, as long as it makes an improvement. We have done calculations as to what proportion of the variation might be due to random variation, and we think it's about 25%.
We also say that the HSMR is not the measure of the quality of the hospital. It is just an adjusted death rate that you might want to look at, and if it's particularly high, you should use it as a trigger to look further. The data available to hospitals are the standardized mortality ratios for each of the diagnoses; you can break them down hundreds of different ways. Certainly if someone has died when the risk was very low, you can go and pull the notes and see whether anything needs to be looked at further.
After the Mid Staffs Inquiry came out, the Prime Minister said that the 14 hospitals with high adjusted death rates (9 on the HSMR and 5 on the Summary Hospital-level Mortality Indicator, or SHMI) had to be investigated, and at the end of that process a number of hospitals were put into what's called special measures. Bruce Keogh was in charge of it, and I was on the advisory group. In his report he said that all 14 hospitals had problems.
I think what has happened subsequently is an incredible change in the acceptance of HSMRs. All of the people on the Care Quality Commission have gone, except one, who was the whistleblower, and there is a new chairman and a new chief executive. The new chairman said last year that the old Care Quality Commission was like a fish rotten from the head down, that there was complete denial. Now, as a result of the Mid Staffs Inquiry, there has been a complete change of attitude in the Care Quality Commission, particularly around monitoring data.
RW: It sounds like you're optimistic. But I can imagine one might have felt the same way after Bristol—that this system has recognized that there are real problems, there's a commitment to change and reorganize, and yet, as you've pointed out, not much happened. What makes you optimistic that this time will be different?
BJ: Well, that's a good question. I was more optimistic after Mid Staffs than after Bristol. First, we all keep on at them. Second, the media are on our side; they know about it. And we did have a campaign at the beginning of last year to get the head of the NHS to resign, and he did resign as a result of pressure from a variety of sources. But change is very difficult.