Tuesday, November 14, 2017

In Conversation With… Wanda Pratt, PhD | AHRQ Patient Safety Network



Perspectives on Safety—Patient-facing Technologies: Opportunities and Challenges for Patient Safety

This month's interview features Wanda Pratt, PhD, a professor in the Information School and an adjunct in Biomedical and Health Informatics in the School of Medicine at the University of Washington. We spoke with her about patient-facing technologies, including the opportunities and challenges for patient safety.

  • Perspectives on Safety
  • Published November 2017

In Conversation With… Wanda Pratt, PhD

Interview


Editor's note: Wanda Pratt is a Professor in the Information School with an adjunct appointment in the Division of Biomedical and Health Informatics in the Medical School at the University of Washington. Her research focuses on understanding patients' needs and designing new technologies to address those needs. We spoke with her about patient-facing technologies, including the opportunities and challenges for patient safety.
Dr. Robert M. Wachter: What are some general principles of both technology and human cognition that one needs in order to understand your work and the kinds of tools that actually are helpful?
Wanda Pratt: A lot of my work brings in material that we have learned from the field of human–computer interaction: understanding what the computer does well, what people do well, and how we can study people and their needs to come up with solutions that blend the best of both worlds, rather than creating technology that makes the problem worse. Our goal is to develop technologies that help people in ways that they cannot do alone. For example, people cannot remember things very well; many studies have shown people can hold only around seven items in working memory. Therefore, it is helpful to have technology serve as an external memory source that they can return to when they need the information, so they don't have to worry about holding it all in their head at the same time, which can create stress and other problems.
RW: What were your hopes and dreams about what patients might be able to do using technologies that weren't true at the time you entered the field in the mid-1990s? If you allowed yourself to fantasize out a decade or two, what did you think you might accomplish?
WP: My hope was that patients could be less dependent on clinicians as the only source of information and have more independence and autonomy. In some ways that's a good goal; in other ways, that human interaction is really important. But there are other aspects of information resources that could be automated and supplied in other ways.
RW: As you've seen that play out now over 20 or so years, what parts have worked particularly well, and what parts have been more challenging than you expected?
WP: Everything is slower than I expected, which is not too surprising. Many issues are less technological and more policy and social issues. We've developed a lot of interesting technology tools. But the policies, the social interactions, and the commercial world have their own challenges, too; the commercial world has not picked up a lot of the advances from the research world that would make the technology better. The current electronic health record systems have a lot of problems from a user interface standpoint. From a research perspective, we know a lot about the whole interaction and integration with work, but that knowledge doesn't seem to be making its way into the commercial world, where it could actually have a bigger impact on the systems that are out there now.
RW: Any idea what the gaps are about?
WP: Thinking about it from a patient's perspective, which is the primary focus of my work, a lot of systems have simply taken what was already designed as a clinician's interface, for nurses and physicians, and switched it over to make it visible to a patient. Everything we know about human–computer interaction tells us that different types of people need different kinds of systems and support. I don't think that the designers of these technologies have taken the perspective of asking what patients need, understanding their needs and perspectives, and developing tools to support them, rather than just turning on an information flow to patients that was designed for clinicians.
RW: Do you think the world is so different that those problems need to be solved by different companies—that the company that built an electronic health record for doctors and nurses cannot have, or is unlikely to build, the skillset and sensibility to appreciate what a patient might need?
WP: I think they could. They would need people on their team with different expertise and different perspectives. They wouldn't want the same design team that built the clinician-facing record, because that team will come with its own biases, expertise, and perspectives. It is time to bring on a team that has the patient perspective and looks at it from that angle. I don't see any reason why the company itself couldn't do that.
RW: It is obvious the language has to be different, and someone who knows the language of medicine and thinks that way may not understand the health literacy level of an average patient. But are there other more subtle differences in how you design for a patient than for a nurse or a doctor?
WP: There are many differences. I would say that the language barrier is the smaller one. I've seen in my own research that people will adapt to the language; if it is important enough to them, they will learn it and figure it out. The bigger issue is that we largely don't understand what the goals and perspectives of patients are. We found that even having a history of who has been in the room is important. Clinicians don't usually need or want that because they already know it, but the patient wants that information, and it has a strong role, with some safety and well-being implications as well. Another perspective comes from thinking about the hospital environment for caregivers, particularly parents of young children, who feel trapped in the room waiting for a doctor to come take a look and have a valuable information exchange. Then they run to the bathroom or to get something to eat, and when they come back they've missed that visit. So in some of our work, caregivers have designed different kinds of apps to address those problems. That all points back to the fact that we don't know what problems patients are experiencing and what information they need to meet their goals. It is not just their life as a patient; they have a whole life, so thinking about how that connects to the rest of their life is also really important.
RW: When you talked about the needs of patients in the beginning, were you thinking about peer-to-peer communities, or was it more about information access, retrieval, and communications with a more traditional health care system?
WP: When I first got started, I definitely wasn't thinking about peer-to-peer information exchange. Now I see that as extremely valuable. Particularly in rare diseases, patients sometimes know more about that particular health condition than the average clinician they interact with, because the clinician might not see it very often. Yet the patient has been dealing with it for a long time, read about it on their own, and learned a lot. That valuable information resource is sometimes discounted and not recognized or supported by the health care community or by technology. Obviously, there are online communities, but they are not really tailored for the health angle or for exchanging that expertise. The one exception would be PatientsLikeMe, which actually is doing a good job of trying to respect the peer-to-peer information exchange among patients.
RW: What is the design thinking that seems to make a difference in building a working community of patients?
WP: There are a variety of things. Some of our work has looked at whether it is helpful if a clinician is involved in the community. The answer is that it usually is not helpful, because the dynamic of the community changes and it often shuts down lines of communication. Yet there can be some worry about misinformation, so having resources and technology to help flag potential misinformation for moderators or other peers would be quite helpful. That said, from studies that I have done, peers are pretty good at policing that themselves and pointing out challenges in a way that keeps the conversation going.
RW: How structured does that need to be? Or is it a natural outgrowth of a community that somebody will call people out when they're getting off into cures or recommendations that have no scientific validity? Is that a natural evolution, or do you have to structure the technology or organize the moderation to facilitate that?
WP: I've seen it happen naturally pretty well. But moderating and maintaining a healthy community is a time-consuming and challenging job, so there is a lot of room for technology to help flag these situations, to ease the job of either the experienced participants who are trying to make sure that the community stays healthy and safe or the official moderators.
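The kind of flagging support Pratt describes can be sketched in a few lines. What follows is purely an illustration, not a description of any existing tool or of her research: the phrase list, the function name, and the simple substring matching are all assumptions made for the example, and a real system would be far more nuanced.

    # Illustrative sketch only: a naive keyword-based flagger that surfaces posts
    # containing unproven-cure language for a human moderator or experienced peer
    # to review. The phrase list and matching rule are hypothetical.
    UNPROVEN_CURE_PHRASES = [
        "miracle cure",
        "cures cancer",
        "stop your medication",
        "doctors don't want you to know",
    ]

    def flag_for_review(post_text):
        """Return the phrases that make this post worth a closer look."""
        lowered = post_text.lower()
        return [phrase for phrase in UNPROVEN_CURE_PHRASES if phrase in lowered]

    if __name__ == "__main__":
        post = "My aunt swears by this miracle cure and says to stop your medication."
        matches = flag_for_review(post)
        if matches:
            print("Flag for moderator review; matched:", ", ".join(matches))

The point of a sketch like this is only to route a post to a human; consistent with Pratt's observation, the judgment stays with the moderators and experienced peers.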
RW: What are the risks to patient safety of patients becoming more engaged through digital tools, particularly engaged in their own care?
WP: The one I hear expressed is a concern that if patients get this information, they won't understand it well, which could create more misunderstandings between patients and care teams. Also, there are worries that patients will react poorly to certain information if it indicates that they have a poor prognosis. Or it could cause them to have an antagonistic relationship with their clinical care team if their interpretation of the information is different than their care team's interpretation. The big concerns are strained interactions and delays in care, possibly because patients become hesitant to follow the recommendations of the clinical care team.
RW: What does research say about those risks?
WP: Our survey work, interviews, and observations indicate that the vast majority of patients are very respectful of clinicians and their time and don't want to be "difficult patients." That is a barrier right now: patients often will not speak up when they should because they don't want to be that difficult patient. It would be good to find ways that technologies could make it easier for patients to flag issues without the worry of getting in the way, while bringing in their own knowledge of what they're experiencing and what's unusual for them, and pulling that personal expertise into the safety role. I see that as really important.
RW: You've talked about patients seeing things in their record and identifying problems. You haven't talked much about patients truly doing self-care. We're moving to a world where they're Googling diagnoses, taking a picture of their rash, and using an app to see if it's a skin cancer. How do you see that playing out in terms of safety threats or opportunities?
WP: I see that as an opportunity more than a threat. There are times when people are hesitant to seek care. If these kinds of technologies and tools can help them see when it is urgent for them to seek care outside of the home, that would be very helpful. The same goes for peers supporting each other: someone posting on social media, "This is happening to my child," and others coming in and saying, "I've heard of this. You should go to the ER and have this looked at by a professional." So in some ways, I see this as more likely to mitigate safety problems and concerns than to cause problems.
RW: In the last several years, the rate of physician burnout has increased. Many attribute it to their electronic health records. Some of it is the clunkiness of the systems. But some of it seems to be that we've opened up the spigot for patients to connect with their physicians in new ways. My primary care doctors at UCSF will talk about going home and having 3 hours of digital work to do that they didn't have to do before. How do you see that, and do you see any solutions to those problems?
WP: That's a big problem. One solution could be some triaging of who deals with what kinds of problems, though there are obviously challenges with important issues not getting triaged appropriately. The hope is that if patients become more self-reliant, then maybe that workload is not as heavy, and maybe the interactions that do happen are the more important ones as well as the more interesting ones from a clinician's perspective. I've talked with people who have been part of the OpenNotes Project and heard from physicians who were skeptical about it and worried about both the additional workload it would create and the potential bad interactions that could happen between a physician and a patient. But by and large, it seems like most people have had very positive experiences and felt they could actually have better interactions, because the patient had time to look at the information and process it, rather than the physician repeating something over and over again or having to justify previous decisions. Obviously, there would be exceptions to that. But the hope would be that it would make the work more rewarding and reduce the work that is fueling the burnout.
RW: You've done some work on the quantified self idea. I remember hearing about somebody who was monitoring every single bodily function for months at a time. What's your reaction to all this?
WP: I work in an information school; I think information is a very positive thing. Obviously, it needs to be processed and used in the right way. I see those technologies and information as potentially being very helpful for noticing potential problems earlier, for helping you reflect on your own behavior and how that influences your health and your well-being. But it could be taken to an extreme for people who have obsessive-compulsive challenges. Having that kind of information could fuel the anxiety of someone who is already very paranoid about their health when there are minor changes that are just fluctuations over time and not really indicative of health problems per se.
RW: When you think about the ability to monitor and manage things that maybe patients didn't have 10 years ago, can you think of a case where you think this is playing out in a very positive way?
WP: It has been particularly useful for gastrointestinal issues such as irritable bowel syndrome and inflammatory bowel disease. People can keep track of their own behaviors (whether it be stress, what they're eating, how much exercise they're getting, or how much sleep they're getting) and see correlations with their own disease states that they weren't able to see before. That has been a really powerful motivator for making changes to their health and lifestyle to improve their disease states. We have seen concrete evidence that it is helping. I'd also include diabetes, where a person can take certain actions that have an influence on their disease. Before these technologies, it was harder to notice those changes. There are some interesting trends in looking at the microbiome as well, and that has strong connections with these gastrointestinal issues, too.
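The correlation-spotting Pratt mentions can be made concrete with a minimal sketch, not drawn from her studies: a few days of self-tracked behaviors alongside a self-rated symptom score, each correlated against the outcome. The column names and values are invented for illustration, and a week of entries is of course far too little data to draw real conclusions.

    # Illustrative sketch only: correlating self-tracked behaviors with a daily,
    # self-rated symptom score. Column names and values are hypothetical.
    import pandas as pd

    log = pd.DataFrame({
        "sleep_hours":   [7.5, 6.0, 5.5, 8.0, 6.5, 7.0, 5.0],
        "stress_level":  [2, 4, 5, 1, 3, 2, 5],     # self-rated, 1 (low) to 5 (high)
        "exercise_mins": [30, 0, 0, 45, 20, 30, 0],
        "symptom_score": [1, 3, 4, 1, 2, 1, 5],     # self-rated, 1 (mild) to 5 (severe)
    })

    # Pearson correlation of each tracked behavior with the symptom score.
    behaviors = log.drop(columns="symptom_score")
    print(behaviors.corrwith(log["symptom_score"]).sort_values())

The value of this kind of log is simply that the behaviors and the outcome sit side by side, so a patient (or an app on their behalf) can look for patterns they could not have noticed otherwise.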
RW: You said progress has gone more slowly than you expected. Do you think the same slow pace is likely to continue for the next decade or do you see an inflection point, and if you do, why?
WP: I do see an inflection point: the combination of the Internet, social media, and the availability of information, as well as new sensing technologies and the ability of patients to get their own health data. That will be a big change both in the technologies for health and probably in the way health care is delivered. Patients will start demanding that things change faster than they have in the past. They will be a forcing function in ways that health policies haven't quite been. I think patients acting as their own safeguards is an underrepresented space in patient safety right now. But I don't think it is a panacea; it needs to be supported better. Still, it has the potential to be very helpful.


Previous interviews can be heard by subscribing to the Podcast






