Kaveh Shojania, MD
According to Kaveh Shojania, MD, the keynote speaker at the 2018 ASCO Quality Care Symposium, dedicated quality improvement work can help repair a fragmented health-care delivery system, but it is challenging, and much can go wrong along the way.1 “I’ve developed this presentation to illustrate some of the pitfalls in quality improvement and how to avoid them,” said Dr. Shojania, Vice Chair, Quality and Innovation, in the Department of Medicine at the University of Toronto.
Historical Context
To give the field of quality improvement historical context, Dr. Shojania referenced a 1995 paper, “No Magic Bullets: A Systematic Review of 102 Trials of Interventions to Improve Professional Practice.”2 He noted, “The dissemination-only interventions—for instance, conferences and mailings—had little impact on quality. Even the more intensive interventions such as outreach visits had only small to modest effects.”
He continued with an example of quality improvement from his own research, a systematic review across 32 trials looking at the impact of computerized reminders, alerts, and decision support.3,4 “As this study shows, if you’re trying to improve guideline-concordant care—say, the percentage of patients who had a proven drug or the percentage of patients who didn’t get an inappropriate medication or test—the typical improvement was 4%. Even when we used the best improvement from each study, it was 5%,” said Dr. Shojania.
He explained that in an update of this review (currently being submitted for publication), there are now 84 trials, but the typical effect remains small (3%). This time, though, a pattern emerged: interventions that required clinicians to acknowledge the decision support and document a reason for not following the recommendation were associated with a median improvement of 23%.
Showing a slide of a Rube Goldberg pencil sharpener, Dr. Shojania emphasized a critical characteristic of quality improvement failure: using a highly complicated way of doing something that’s very simple. He also presented a slide of a patient being transferred who had five medical wristbands, one of which had no definable purpose. “My point is, if you don’t fully understand the process, even though you’re well intentioned, it ends up being far more complicated than necessary,” he said.
Dr. Shojania noted that about 11 identifiable pitfalls contribute to the disappointing results of quality interventions, and he discussed the 3 most relevant to today’s busy practices.
Jumping to a Solution Before Understanding the Problem
A common problem in quality intervention strategies is pouncing on a solution before fully understanding the problem. For example, employing interventions such as computer reminders, report cards, and checklists without a theory behind them can lead to problems. “It would be a bit like examining a patient and saying, ‘You look ill; I’m going to give you a red pill,’ with no idea of its activity or what exactly the illness is,” said Dr. Shojania.
He stressed the necessity of a cogent quality improvement theory, because messy practice settings can obfuscate the target problem and create implementation challenges. He cited a quality improvement report that addressed hospital-acquired infections with a model that included staff education, clinical champions, and empowering patients to ask providers whether they had washed their hands before touching them.
“This strategy increased hand hygiene compliance by 50%, so the hard work paid off, and patient engagement is a wonderful thing. But there is no identified connection between the problem and the solution. You really need to determine the barrier to hand-washing and what intervention works,” said Dr. Shojania.
According to Dr. Shojania, it is vital to think through a theory before implementation.5 For instance, patient selection sharpens the thinking about what types of patients will derive the most benefit from an intervention, and recognition of the intervention’s key ingredients helps to identify missing components. “These systems in which we operate are complex, and if we don’t take the time to understand the processes where we’re trying to intervene, we can, without realizing it, contribute to the complexity,” he stressed.
Rushing to Rigorous Evaluation
Dr. Shojania commented that a type of problem common among academics is rushing to a rigorous evaluation, or “premature evaluation—focusing all your efforts on proving that something doesn’t work instead of first taking time to optimize it.”
He noted that a prime example of rushing to evaluation may occur with the launch of a randomized controlled trial. “Don’t rush into a randomized controlled trial; optimize the intervention first, and use time series data to provide a robust evaluation. One thing I stress more and more is: don’t pick a problem for intervention just because it’s a problem. Pick a problem that you will likely succeed at solving and one that won’t make people’s jobs harder. Ideally, it will make things easier,” said Dr. Shojania.
Project Sustainability
Dr. Shojania’s final caveat in implementing a quality improvement intervention is to consider the project’s sustainability. The classic failure, he said, is choosing a project that stops working the moment you stop paying attention to it.
He also suggested introspection: Does your intervention depend on a specific person? Staff enthusiasm? New funding? “A lot of projects aren’t worth sustaining because they simply weren’t great ideas in the first place. So choosing the right intervention is critical, because a whack-a-mole approach leads to change fatigue,” cautioned Dr. Shojania. ■
DISCLOSURE: Dr. Shojania reported no conflicts of interest.
REFERENCES
1. Shojania KG: Common pitfalls in quality improvement and how to avoid them. Keynote Lecture. 2018 ASCO Quality Care Symposium. Presented September 29, 2018.
2. Oxman AD, et al: No magic bullets: A systematic review of 102 trials of interventions to improve professional practice. CMAJ 153:1423-1431, 1995.
3. Shojania KG, Jennings A, Mayhew A, et al: Effect of point-of-care computer reminders on physician behaviour: A systematic review. CMAJ 182:E216-E225, 2010.
4. Shojania KG, et al: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev 8:CD001096, 2009.
5. Davidoff F, et al: Demystifying theory and its use in improvement. BMJ Qual Saf 24:228-238, 2015.