Saturday, September 27, 2014


AHRQ WebM&M: Morbidity and Mortality Rounds on the Web




In Conversation with…Enrico Coiera, MB, BS, PhD
Editor's note: Enrico Coiera, MB, BS, PhD, is a professor and director of the Centre for Health Informatics (Australian Institute of Health Innovation) at the University of New South Wales. Dr. Coiera has researched and written about clinical communication processes and information systems. We spoke with him about how interruptions and distractions in the clinical environment influence patient safety.




Interview
Dr. Robert Wachter, Editor, AHRQ WebM&M: How big a problem are interruptions and distractions in the world of patient safety?
Enrico Coiera: Interruptions happen every day to every clinician, nurse and doctor alike. It's become clear over the last decade that in some clinical settings, not only are interruptions frequent but they're also a patient safety risk. We're not saying every interruption is a bad interruption, but we do know that at certain places and times, they can lead to significant patient risk.
RW: Which areas have the highest risk?
EC: The challenge is in understanding why interruptions can cause harm. They essentially disrupt working memory with the consequence that you can forget to do what you're about to do or, very oddly, repeat the task you've already done. For example, you might have administered a medication to a patient and then been interrupted, but because you do the task so often any individual act is not particularly memorable, and you give the same dose again.
An interruption can also result in the interrupted task being incorrectly executed. My classic example is driving to the shops on a Saturday morning, when the cellphone goes off. By the time you've finished your call, you don't find yourself at the shops but instead in the car park at work. The call has occupied your attention, and because you are distracted, you follow an initially similar, but ultimately wrong plan, which dominates because it is well rehearsed and easily enacted.
Psychology tells us certain variables predict higher risk of memory disruption. Probably the most important is working memory load, which is governed by how many things you have to remember at any one time and how complicated each one is. If a task involves mental calculation or many steps, then your working memory load will be high, putting you at more risk of disruption by interruption. Another issue is similarity: if the interrupting task is very similar to what you're currently doing, the risk of memory disruption also increases.
Another variable is the interruption modality—is the interrupting task visual or auditory, for example? Those modalities are processed differently in the brain. It's quite possible to be talking about one thing and looking at something different, to multitask. If the interruption uses the same modality, however, like two visual tasks, then there is more chance of one disrupting the other. The final variable is how good you are at the task you're doing. If you have a lot of practice and experience, your chances of handling an interruption well increase.
So, the kinds of clinical tasks that we worry about include administering medications, preparing injectable chemotherapy and IVs, inducting anesthesia, and putting in a central line. They're all tasks whose cognitive and task characteristics put them at risk of disruption by interruption.
RW: In terms of the tasks being interrupted, is it worse if the task is a mindless behavior? Something rote and fairly automatic? Or is it worse if it's something that you have to think very hard about, with a high cognitive load associated with it?
EC: They both have their special problems. If you're performing a high cognitive load task, then you are very much at risk of disruption. If it's rote, for example if there's a sequence of tasks, then a lot depends on where in the task sequence you get interrupted. If you get disrupted towards the end of a task sequence, you may experience what we call a post-completion error, which means you forget to do the last element in the sequence—the cleanup bit. An easy example is using a cash machine: you put your card in the machine and then pull the cash out. You have completed the primary task, which is to get the cash, and you walk away leaving the card in the machine—omitting the post-completion or cleanup step. They've now designed cash machines so that you have to pull your card out before getting any cash, so you cannot make a post-completion error. Similarly, you might complete a clinical task but then forget to write the notes or otherwise communicate what you have done because of disruption.
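As an illustration of the cash-machine redesign Coiera describes, here is a minimal Python sketch (the ATM class and its method names are hypothetical, purely for illustration): sequencing the cleanup step before the goal step makes the post-completion error structurally impossible.

```python
class ATM:
    """Hypothetical cash machine used only to illustrate the idea."""
    def eject_card(self):
        print("Card ejected")
    def dispense_cash(self):
        print("Cash dispensed")

def withdraw_unsafe(atm: ATM) -> None:
    # Goal step first: once the cash is out, the primary task feels
    # complete, so an interrupted user may walk away and omit the
    # trailing cleanup step (a post-completion error).
    atm.dispense_cash()
    atm.eject_card()

def withdraw_safe(atm: ATM) -> None:
    # Redesigned sequence: cleanup is forced *before* the goal step,
    # so it can never be the forgotten last item.
    atm.eject_card()
    atm.dispense_cash()

if __name__ == "__main__":
    withdraw_safe(ATM())
```

The same ordering idea applies to clinical sequences: where a trailing step like writing the notes can be folded into the task rather than left to the end, an interruption has less to erase.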
One thing that mitigates risk is how much of the primary task context is available when you resume it after interruption. We did some experiments on the impact of interruption on online order entry for medications and showed that interruption had limited to no impact. We think this is because after an interruption, when you return to the computer screen, it typically shows you where you left off. You can get back into the task. You're reminded by the cues on the screen where you were in task sequence, so you can recover very easily. It's very different if you're walking down a corridor and don't have a memory aid. We know that your control over the short time before you get interrupted is very important in such circumstances. If you see somebody about to talk to you and can say to yourself, "I was about to do x. I'm going to just rehearse that in my mind now," then attend to the interruption, you have a good chance of returning to x. But if you're interrupted with no control, you don't have that short moment to remind yourself what you're up to, and you thus have more chance of forgetting.
RW: I was interested in what you just said about the individual's capacity to get out of an interruption safely. In that example of getting pulled aside in the hallway, the individual is doing the equivalent of putting a rubber band around a finger as a reminder that he or she was interrupted and to remember to do the next thing. I imagine people probably feel that they have some capacity to protect themselves against this. The question is: do they or are we kidding ourselves?
EC: We're often kidding ourselves. The mental model to have is that we can carry about five primary tasks in working memory. (We used to think it was seven, but it appears to be only five.) That's your mental to-do list, and when you get interrupted you can lose any item from it. But you don't know which item you have just lost. However, mentally rehearsing an item just before an interruption increases its chance of survival, as does creating an external cue or reminder. In aviation, for example, when they're going down a checklist, they keep a finger on the clipboard next to the item they are up to, so that whatever happens to them, they know where their finger was in the task list. To cope with interruptions, we thus need a set of strategies that create cues to where we are in the world and where we are up to—even a scrap of paper will do. If you're in the middle of a complex task (the obvious example is a mental calculation), it's very good to have written something down as you go. So, without some form of obvious cueing, or without a chance to actively rehearse, "I was about to go and see Mrs. Jones in bed three," your chances of being disrupted are quite high if you're busy.
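A toy simulation of this working-memory model, assuming (purely for illustration) a five-item list and a fixed chance that each unrehearsed item is lost when an interruption lands; none of the numbers are empirical.

```python
import random

def interrupt(todo, rehearsed=None, loss_prob=0.3):
    """Toy working-memory model: each unrehearsed item is lost with
    probability loss_prob when an interruption hits; an item rehearsed
    just before the interruption survives. Illustrative numbers only."""
    return [item for item in todo
            if item == rehearsed or random.random() > loss_prob]

tasks = ["give 8am meds", "see Mrs. Jones in bed three",
         "order venogram", "write discharge note", "call pharmacy"]

print(interrupt(tasks))  # any item may silently vanish
print(interrupt(tasks, rehearsed="see Mrs. Jones in bed three"))
```

The point of the model is the asymmetry: you don't get to choose which item is lost, but rehearsal or an external cue protects the one item you nominated.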
RW: Let's say after listening to you someone says, "I not only want to build better systems, but I want to do whatever I can personally about this." Would you recommend that when they get interrupted in the middle of a task, before they go in and see a patient, they automatically say to themselves, "I was just interrupted," and they pull an index card out of their pocket and write down where they were, or is that just not going to work?
EC: That's a good idea, but it's too late once you begin to actively engage with the interrupting task. You need to do it ahead of the interruption. We can be taught a number of different interruption-handling strategies. The first is to say "no," signaling unavailability. You can just put your hand up or shake your head: "I'm on the phone," that sort of thing. That's perfectly legitimate if you know what you're doing is primary and important. The second thing you can do is to quickly note down the state of the world you're in, which might be where you were in a task, where your finger is on the clipboard, or just rehearse in your mind where you're up to. If you do any of those things just ahead of the interrupting task coming in, then you're fine. It's harder to recall afterwards, especially if you're very busy.
RW: We've already given examples from aviation in the cockpit and from the bank machines. Any other examples you've seen from industries outside of health care? How can interruptions be mitigated?
EC: Very interesting work has come out of the design of computer systems and human–computer interaction. This is all around being able to record the state of the world at the time you're interrupted, being able to design systems that are interruption-proof in some way. They teach us a lot about the sorts of systems we can eventually put in place to minimize distraction. For example, computer designers know that it is a really bad thing to have an alert window pop up and interrupt you on the computer, sometimes even demanding that you click on it before you can return to what you were doing. Luckily it doesn't have to be that way, and there are ways of designing computer interaction that are not interruptive or distracting and allow you to switch tasks. Nice computer environments put in action what psychology tells us.
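A sketch of that non-interruptive style, assuming a hypothetical NotificationQueue rather than any real toolkit's API: alerts accumulate quietly and are reviewed at a task boundary, instead of a modal pop-up seizing focus mid-task.

```python
from collections import deque

class NotificationQueue:
    """Non-modal alerting: messages queue quietly instead of popping
    up and grabbing focus; the user drains them at a natural pause.
    A hypothetical sketch, not any particular toolkit's API."""

    def __init__(self):
        self._pending = deque()

    def post(self, message: str) -> None:
        # No focus grab, no forced click: just record the alert.
        self._pending.append(message)

    def review(self):
        # Called when the user reaches a task boundary.
        while self._pending:
            yield self._pending.popleft()

alerts = NotificationQueue()
alerts.post("Lab result available for bed 7")
alerts.post("Pharmacy query on 2pm order")

# ...user finishes the current task, then reviews at the boundary:
for msg in alerts.review():
    print(msg)
```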
RW: How does teaching from that world inform the issue of alerts and notices; for example, that we're about to give a patient a medicine to which they're allergic? It sounds like thinking in the field has changed about how to organize that kind of alerting.
EC: I don't believe that the design of modern health IT systems actually reflects this literature and understanding very well at all. Most modern e-health systems presume that the user is not doing anything else and that the screen has their full attention. I think you have to design clinical systems so that they can be used in messy clinical environments, with the understanding that the person using your system might be distracted, turn away to do something else, and then come back to the screen. There needs to be the equivalent of a digital finger pointing to where you were in your onscreen task sequence. Design the screens so it is very easy to identify cues to what you were doing and to quickly resume it post-distraction. The design research is clear, and many industries are ahead of us. I'm not that excited by what I see in terms of modern-day clinical systems.
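A minimal sketch of such a "digital finger," with hypothetical names and a local JSON file standing in for the application's state store: the system persists the user's position after every completed step and cues it when they return to the screen.

```python
import json
from pathlib import Path

STATE_FILE = Path("session_state.json")  # hypothetical state store

MED_ORDER_STEPS = ["select patient", "choose drug", "set dose",
                   "check interactions", "sign order"]

def save_position(task: str, step_index: int) -> None:
    # Persist the "digital finger" after every completed step.
    STATE_FILE.write_text(json.dumps({"task": task, "step": step_index}))

def resume_banner() -> str:
    # On return to the screen, cue the user to where they were.
    if not STATE_FILE.exists():
        return "No task in progress."
    state = json.loads(STATE_FILE.read_text())
    return (f"Resuming '{state['task']}' at step "
            f"{state['step'] + 1}: {MED_ORDER_STEPS[state['step']]}")

save_position("medication order", 2)
print(resume_banner())  # Resuming 'medication order' at step 3: set dose
```

The design choice is that the cost of resumption is paid by the system, not by the clinician's working memory.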
RW: When you say that our working memory has gone down from seven to five items, is this like global warming—our memories are going bad because we've become dependent on the numbers in our cellphones and on Google—or is it just that better research tells us the true number was five all along?
EC: It was five all along. However, when psychologists say there are five items in working memory, that doesn't mean five facts—they mean five pointers to different "chunks" of stored memory. If you're a very experienced clinician, each chunk could be a very complex process, meaning that you are functionally able to store an awful lot in five working memory items. A more junior clinician will not have that degree of compiled expertise and will carry much smaller chunks. This means their effective recall using working memory is smaller. That is one reason experience and practice make you much more capable of dealing with interruptions.
RW: Let's talk about risk mitigation strategies that have been tried in health care. I've heard about nurses wearing signs that basically say, "Leave me alone. Don't talk to me, I'm mixing meds." I've heard about people putting tape on the ground to create a "zone of quietness" around them. What have you seen and what works?
EC: There are three separate classes of intervention that we can try.
The first is simply to reduce interruption rates in a unit. It's like putting fluoride into the water: let's only interrupt when it's appropriate. To do this we first have to ask: why do we interrupt in the first place? Often it's to seek information to execute a task, but the task may not be urgent, and so it can be delayed until the individual we want to talk to is not busy. The first and probably most powerful intervention to reduce interruption rates is simply to educate staff to be considerate about where, when, and why they interrupt each other. Another intervention makes interruption unnecessary by providing alternate information sources. If, for example, certain requests for information are frequent in a unit, then one can create an alternate information source to avoid having to ask the question. People who study organizations often talk about tacit knowledge, the knowledge of how to get things done. When I was a young doctor, on newly joining a medical team you would often be given a little worn notebook that included all the special rules for dealing with the senior doctors you were working with and all the phone numbers needed to carry out the special tasks associated with that team. When I first looked at interruptions about 15 years ago, I studied a hospital in the UK, and the most common source of interruptions was "how to" questions, such as: "Who do I call to organize a venogram? How do I do x? Who's on service today?" Those bits of information are very easy to collate, and collating them removes a primary generator of interruption. All this information can now be captured in information systems. Being able to provide the information that people interrupt each other for in another place, perhaps using a team or unit wiki, can be very powerful.
Another interruption reduction strategy that works really well in other industries is to move from education to enforcing formal rules for both preventing interruption and enabling interruption. In airline team training, they try to deal with status asymmetry, so that a flight attendant is empowered to interrupt the pilot when he or she believes there's risk to the plane. We also see such status asymmetry playing out in health care where junior staff don't feel empowered to raise concerns despite the risk of patient harm. We should have rules for when not to interrupt, but also to empower our people to interrupt because the last thing anybody wants to do is to prevent crucial communication from occurring.
The second class of interventions signals, "I'm about to perform a high-risk task, and you shouldn't interrupt me right now because it might cause harm." This is the notion of the sterile cockpit, the "no interruption zone." There are lots of ways of doing it. As you said, when administering medications, one can wear a brightly colored vest that says, "Medicine Administration Occurring," or put up a sign or tape around you. All those things can work. The closest setting to the cockpit in health care is when we execute procedures like surgery or insert lines. It should be pretty easy to train people that when the doctor is putting a line in, they should wait until that is done.
The third set of interventions is to design work and workflow to be interruption-tolerant. We work in an environment where there are interruptions; we know we cannot avoid them. So let's put in handling strategies and processes that minimize their impact. Let's educate staff in basic interruption-handling strategies, empowering them to say no, or to rehearse, or to write things down. Let's record cues in the environment that allow people to return to an interrupted task knowing where they were. That might be in the digital space, with a computer system that shows you clearly what you were doing; it might be two words scribbled on a whiteboard, or a handheld device that can capture a written or a voice note. There are lots of opportunities for technology to do that. I often say that there is no universal answer, because the risks of interruption vary with task, so each clinical setting has its own risk profile.
RW: I wonder whether there are individual characteristics. Some people love to have background music while they're doing complex tasks like surgery and others want complete quiet. Do you have to sort out individual preferences and the way individual brains work?
EC: I guess so. What we're talking about are very hardwired basic attributes of the human brain. A lot of people say "I'm a good multitasker," presuming that they're somehow better than others, but the evidence is usually against that. Our perceptions of what we can and cannot do and what we actually can and cannot do are very different. Just because somebody likes music when they're operating doesn't mean that it is a safe thing to do. If you're doing surgery, which is visual and tactile, you have a free auditory channel; this notion of modality allows you to listen and not be too disrupted. But if someone were to interrupt you with an auditory interruption, then you'd be in trouble. Maybe in 10 years' time listening to Bruce Springsteen while doing surgery will be like smoking.
RW: We ran a case on AHRQ WebM&M about this, in which someone was about to do something on rounds and a text message came in saying there's a big party tonight. Of course, the person answered the text message, went back, and forgot to order the medicine, which turned out to be a crucial medication. So it brings up this issue of smartphones and the ubiquity of technology. You've mentioned that you could pull a device out of your pocket and make a note to yourself. But with the number of inputs, the blurring of the line between professional and personal communications has become tremendous. How do you think this has all changed in the world of smartphones?
EC: This is a very important issue. These devices have great potential to be interruptive. The example you gave is important: it could have been a message about a party or a message about a patient in another ward; it really doesn't matter.
But there's nothing inherent in smartphones that means they must interrupt you. We haven't adjusted the settings appropriately for the clinical environment. It's up to us to configure them in the right way. I don't understand why, for example, when e-mail comes in, which is an asynchronous, non-interruptive form of communication, people tolerate an interruptive alert that makes the machine go "bing." I'm particularly disturbed by the number of smartphone applications that now, by default, trigger alerts for the most trivial of events.
It's very important to recognize that these devices are built for the consumer world; while they're very powerful, when we bring them into a work setting, we need to configure them so that they don't interfere with work. Turn off alerts and reminders. If you're in a clinical setting where things are particularly at risk, then there should be strict rules about when you can and cannot use certain devices or functions.
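A sketch of what "configure them for the clinical setting" could look like in code; the alert categories and the should_interrupt policy are assumptions for illustration, not any device's real API. The idea is simply that only clinically urgent messages earn an interruptive alert, and everything else is held for later review.

```python
# Hypothetical notification policy for a clinical setting: urgent,
# patient-related messages may interrupt; everything else is silenced
# and queued. Category names are illustrative only.

URGENT_CATEGORIES = {"patient_deterioration", "stat_order", "code_call"}

def should_interrupt(category: str, on_duty: bool) -> bool:
    """Return True only for alerts worth breaking a task for."""
    return on_duty and category in URGENT_CATEGORIES

# Email is asynchronous by design, so it never rings the bell:
assert not should_interrupt("email", on_duty=True)
# A deteriorating patient does:
assert should_interrupt("patient_deterioration", on_duty=True)
```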
RW: The question is this balancing act: what if it was about a patient in another ward who was sick? We have this culture such that people are hypersensitive about interrupting someone, but sometimes somebody needs to be interrupted. You've mentioned rules for interruptions, but how hard is getting this balance right?
EC: That's a very important cultural issue. For the last decade and a half, whenever I talk about interruption, the first question I get from clinicians is, "Are you saying that I cannot communicate on behalf of my patient? Interruptions are a crucial way for me to get the message across. The patient might be unwell." And the answer is that you must communicate when you have concerns. As we know, one of the most important criteria for activating medical emergency teams is clinician concern. So if you're worried, communicate. But most of today's interruptive communication is not in that class.
RW: What got you interested in this in the first place?
EC: In the early 1990s, when there was an enormous push toward electronic records, I went to observe clinical work. What became clear was that, despite the certainly very important reasons to support record keeping with information systems, most communication happens face-to-face and verbally. That was not really understood in the literature. When you look at what happens face-to-face, you quickly notice the extent and toll of interruptions, multitasking, and all manner of things that had never been explored before in the informatics literature. It's very clear now that if you talk about supporting information needs in health care and you're only thinking about the electronic record, then you're missing probably 80% of the information space. The biggest information repository in health care sits in the heads of all the clinicians that you work with, and the biggest network is not the computer network but the network of conversations. What happens in health care is essentially communication-based and person-to-person. Computational tools should support that; they shouldn't dominate.
RW: What does this look like 10 or 15 years from now? Do IT tools get sufficiently mature and nuanced that in fact it's no longer 80% verbal but it's 50/50? Or do you think it will always be that this exchange of verbal data is too hard to encapsulate in an electronic system?
EC: I think what will happen is that we'll increasingly be able to use computer systems for information-seeking conversations: How do I do x? Who does y? But most conversations are sense-making conversations; they involve discussion, sharing information, and making decisions about treatment. We will always want to have discussions about those things. They're not going to go away.
RW: I wrote on my blog about our implementation of Epic—my sense was that it needed a field where the user was instructed to say what the hell is going on. In some ways Epic was forcing a very granular, problem-oriented, if x then y, record of what was happening with the patient, and we were beginning to lose the big picture. How do you make sense of this entire enterprise?
EC: If you ask me what I think about most of the e-health systems that we have today, I would say that they are 1980s constructs. They are very focused around the idea that the clinical record is primary and therefore all work needs to conform to the needs of the record. But that's a broken idea—technology demanding that work fit around its limitations. In my view, good information tools are communication-based and focused on clinical tasks, not clinical records. If you ask me what the future looks like, I would say look to devices like Google Glass. Look to augmented reality, where we have a very different way of engaging with information. The idea that I can put on a set of spectacles that display task-specific information as I walk through a ward is quite powerful, I think. Imagine if, before I touch a patient, "Please wash your hands" flashes up on the screen of the glasses because they know I have not been to a washbasin—that changes my interaction with the world. A lot of such intelligence can happen in the background once technology focuses on the physical context, the people around you, and the task at hand. The way we engage with each other at work doesn't have to be through forms and records. It can be entirely different. In my fantasyland, the EMR disappears into the distance and work-supporting technology becomes primary. I think the record should be an artifact that arises incidentally out of what you do. It shouldn't dictate what you do. We're seeing work at the moment dictated by technology, and it doesn't need to be that way.
