Health IT

3 ways AI can serve as a safety net to help hospitals reduce adverse events (Update)

In a talk at SXSW in Austin this week on the future of AI, Microsoft Research Technical Fellow and Managing Director Eric Horvitz highlighted a few examples of how AI can be used to learn from adverse events and prevent them.


This story has been updated from an earlier version.

Human error is a troubling problem in healthcare, not only for hospitals and providers but also for patients and their families. A report published by The BMJ last year noted that medical errors claim 250,000 lives each year, making them the third leading cause of death in the U.S. The need to address this issue in a meaningful way has led to the development of numerous clinical decision support tools and efforts to improve care team communication. Artificial intelligence can play a critical role in improving patient safety as well.

In a talk at SXSW in Austin this week on the future of AI, Eric Horvitz, Technical Fellow and managing director at Microsoft Research, highlighted a few examples of how AI can be used to learn from adverse events and prevent them.

Horvitz said he was excited by the potential applications of AI in healthcare, especially those where it could protect and assist physicians, much like a safety net.

“This is a passion of mine,” he said. It’s worth noting that Horvitz received an MD in addition to a PhD in computing from Stanford University.

Failure to rescue

Predictive analytics is one area where lessons from previous events, through patterns spotted in patient data, could be used to help doctors avoid failure-to-rescue situations. It could help physicians intervene earlier when a patient develops complications that rapidly multiply. Horvitz said hospital readmissions are a good example of where this could be applied. He offered a more detailed description in response to questions after the talk.

“We’re considering [data from] thousands of patients, including many who died in the hospital after coming in for an elective procedure. So when a patient’s condition deteriorates, they might lose an organ system. It might be kidney failure, for example, so renal people come in. Then cardiac failure kicks in so cardiologists come in and they don’t know what the story is. The actual idea is to understand the pipeline down to the event so doctors can intervene earlier. Eight years ago we developed a clinical decision support tool called RAM — Readmissions Management. The software is applied when patients are to be discharged. It predicts which patients are likely to bounce back once discharged and which patients will need more care.”

Caradigm, a joint venture of Microsoft and GE, produces the software platform.
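
To make the idea concrete, the sketch below shows the general shape of a discharge-time readmission-risk score: a model trained on historical records flags patients whose predicted risk crosses a threshold. The features, synthetic data, model, and threshold here are hypothetical stand-ins, not details of RAM or Caradigm's platform.

```python
# Minimal sketch of discharge-time readmission-risk scoring.
# All feature names, the synthetic data, and the 0.3 threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for historical discharge records:
# columns = [age, length_of_stay, prior_admissions, abnormal_lab_count]
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.15).astype(int)  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# At discharge, score the current patient and flag for extra follow-up
# if the predicted readmission probability crosses the chosen operating point.
patient = rng.normal(size=(1, 4))
risk = model.predict_proba(patient)[0, 1]
if risk > 0.3:
    print(f"Flag for care-management follow-up (risk={risk:.2f})")
else:
    print(f"Routine discharge (risk={risk:.2f})")
```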

Surprise modeling

Even for an accomplished physician or surgeon, it can be difficult to deal with unexpected complications that fall outside that doctor's experience or knowledge base.

“Smart physicians can miss problems that are hiding,” Horvitz said. 

During his talk, Horvitz referenced work at the Johns Hopkins Armstrong Institute for Patient Safety and Quality. The Institute received a $4 million grant last month to implement enhanced recovery after surgery protocols in 750 hospitals to reduce complications and shorten hospital stays.

In a follow-up question after the talk, Horvitz explained how this clinical decision support tool would work. Not only could it be enlisted in surgery but it could also provide another layer of support when discharging patients.

“You could imagine a system you could build from data about [specific cases] that could reason in real time and could tell a physician, ‘I know you are an expert, but I am built as a system to reason at the frontier of your knowledge … so listen to me.’”

There’s an entire area of study within human cognition of how novelty and surprise affect human thought, referred to as the Bayesian theory of surprise.
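
In that framework, surprise is often quantified as the divergence between a model's prior belief and its posterior after an observation. A minimal sketch with made-up patient-state distributions:

```python
# Bayesian surprise as the KL divergence between posterior and prior beliefs.
# The three-state distributions below are illustrative, not from any clinical system.
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as aligned probability arrays."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

prior     = [0.80, 0.15, 0.05]   # belief over (stable, deteriorating, critical)
posterior = [0.30, 0.45, 0.25]   # belief after an unexpected lab result

surprise = kl_divergence(posterior, prior)
print(f"Bayesian surprise: {surprise:.3f} nats")  # larger = more surprising
```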

Complementarity

Machine learning is also increasingly being applied to medical imaging. Ideally, using this technology to support physicians comes down to physicians and machine learning tools each playing to their strengths. If a physician were viewing medical images to determine whether there were breast cancer metastases in a patient’s lymph nodes, a machine learning tool could scan those images and, based on patterns detected in previous patients’ images, identify those metastases. A physician could confirm that assessment or reject it.

“We are never going to replace human touch, support or dialogue,” Horvitz said in his talk. “But if we can get computers to do the drudgery [work], we can think about how humans can excel in healthcare.”
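
As a rough illustration of that division of labor, the sketch below has a model pre-score image patches and route only the suspicious ones to a pathologist for confirmation. The Patch structure, scores, and threshold are hypothetical stand-ins for the output of a trained image classifier.

```python
# Hypothetical triage step: the model handles the drudgery of scanning every
# patch, while the pathologist reviews only the patches the model flags.
from dataclasses import dataclass

@dataclass
class Patch:
    slide_id: str
    region: tuple        # (x, y) location on the slide
    model_score: float   # assumed probability of metastasis from a trained model

def triage(patches, threshold=0.5):
    """Split patches into those needing physician review and those auto-cleared."""
    for_review = [p for p in patches if p.model_score >= threshold]
    cleared = [p for p in patches if p.model_score < threshold]
    return for_review, cleared

patches = [
    Patch("node-17", (120, 340), 0.92),
    Patch("node-17", (400, 80), 0.08),
    Patch("node-17", (250, 215), 0.61),
]

for_review, cleared = triage(patches)
print(f"{len(for_review)} patches flagged for review, {len(cleared)} cleared")
```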

Photo: Bigstock 

Clarification: An earlier version of this story suggested that Horvitz received his MD before he pursued a PhD in computing from Stanford University. He actually received a PhD and then an MD two years later.
