
Startup makes fully autonomous medical voice assistant

Seattle-based Saykara claims its voice-based physician assistant can now fully automate documentation for some patient encounters, meaning no scribe is required on the back end to confirm the system’s results. The system can also automatically fill out the correct fields in the EHR, requiring no clicks.

An illustration depicts a physician talking with a patient while a voice assistant records the conversation.

One of several startups trying to make physicians’ lives a little bit easier said its voice-based assistant can now document certain clinical encounters autonomously. Seattle-based Saykara hopes to stand out from the pack with a system that can turn physicians’ conversations with patients into complete clinical notes, without requiring further human interaction.

Numerous voice-based assistants, the medical equivalent of Amazon’s Alexa, have popped up as startups try to find an edge in easing the stack of paperwork physicians face every day. The idea is that, instead of typing into a computer, physicians can spend more time talking to patients while their notes are automatically entered into health record systems.

It’s a nice idea, but many voice assistants still require some screen time, whether from a medical professional or a scribe. Many physicians still have to click into the correct field in an EHR before dictating what they want to say. And often, on the back end, someone is checking that the system transcribed the words correctly.

Like their consumer counterparts, many medical voice assistants also use a wake word (such as “Hey Alexa” or “Ok Google”) to determine when they should act. Saykara’s assistant instead listens to entire conversations, then sorts the information into the correct fields. The only time it requires a voice command is to prescribe a medication or make a referral.

“Most physicians still don’t use speech recognition in the exam room,” Saykara CEO Harjinder Sandhu said. “What we wanted to do was build a system that could listen in on doctor-patient conversations, interpret those conversations and create clinical notes.”

It took Saykara three years to reach that point. Sandhu compared the process to developing a self-driving car: initially, the car learns the route while the driver still has their hands on the wheel. For Saykara, that meant tens of thousands of encounters going through the system each month, each verified by human reviewers. Once the system proved itself accurate, Saykara allowed it to go hands-off for certain encounters.

For example, Sandhu said, the system is good with orthopedic cases, such as a patient reporting shoulder pain. In general, the system learns faster in narrower specialties, though it also has plenty of experience with cases often seen in primary care, such as coughs and colds. Ultimately, the physician will still review and sign off on the notes.

“It’s trying to interpret conversations by recognizing information in those conversations and putting it in the right context,” Sandhu explained. “The challenge is, listening through conversations is very difficult. Conversations and human language are very complex.”

Sandhu, a former computer science professor, founded the company in 2016. Before that, he created a speech recognition solution that was sold to Nuance Communications in 2005.

Saykara currently works with 18 specialties, including primary care, pediatrics and orthopedics. Right now, the startup is focused on outpatient encounters, but Sandhu also hopes to test the system in an inpatient setting.

“We want to get all interactions to a level of quality where we don’t need a human behind the scenes,” he said. “There are certain encounters where we can do that today.”

So far, 25 organizations use Saykara’s technology, including Providence Saint Joseph Hospital in Renton, Washington, and Swedish Medical Group in Seattle. The company has raised $9 million to date and has 30 employees.

Photo credit: Saykara
