AI revolution in healthcare will occur when the tyranny of the screen ends

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive.

But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow to deliver change in the processes of care? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI’s successful application has been the availability of meaningful data sets to train on. This surprises many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that could be captured in the course of patient care either is not captured at all or is captured minimally and inconsistently. Often, just enough information is recorded in the EHR to support billing, and much of it is in plain-text (not actionable) form. Worse, documentation requirements have taken a serious toll on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn’t create the documentation challenge, but using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, yet it hasn’t changed the fundamental dynamic of a screen interaction that takes attention away from the patient. Indeed, with speech recognition, physicians often stare at the screen even more intently, since they must watch for mistakes the recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain and been witness to its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the cost of training and then employing a scribe has spurred many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplifies the challenge by putting boundaries on speech: a user must express a single idea at a time, within a few seconds, and within a fixed list of skills that these systems know how to interpret.

In contrast, an AI system listening to doctor-patient conversations must deal with the full complexity of human speech and narrative. A visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and the conversation can meander into irrelevant details and tangents that don’t necessarily contribute to the physician’s diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by humans, are filling the gaps in AI competency, allowing these systems to succeed while incrementally chipping away at the goal of full autonomy. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will come from capturing a comprehensive set of data about the patient journey in a structured and consistent fashion and putting it into the medical record, thereby building a base for all the AI applications to come.


Harjinder Sandhu

Harjinder Sandhu is the CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.