It’s hard to get away from the topic of large language models, ChatGPT and, more broadly, artificial intelligence in healthcare. It’s all over the news, on social media, in the conferences we go to (including MedCity’s own INVEST conference that concluded earlier this week in Chicago) and even in the pitches that I get from our healthcare content contributors.
Yet the fear about AI is real. And I don’t mean Ex Machina type doomsday scenarios where AI becomes sentient and takes over the human world. The more rational fear is its authoritative tone and its ability to present even false information as if it were true (think of deepfakes), not to mention algorithms being leveraged to deny care.
In response to the awesome power this new technology wields, which some believe will prove as pivotal as the Industrial Revolution, there is a greater recognition that standards need to be developed. Not surprisingly, global agencies, corporations and governments, including the White House, have taken up the charge of setting forth guidelines for responsible AI. In this episode of the Pivot podcast, I spoke with Suchi Saria, associate professor of medicine at Johns Hopkins University and director of its Machine Learning and Healthcare Lab. She is also CEO of Bayesian Health. Saria has spent a lot of time researching responsible AI and how to develop a framework for its adoption in healthcare.