MedCity Influencers

AI means nothing without people

AI is an amazing new tool — a skeleton key to unlock previously unimaginable innovations. But it is still just a tool and like all other tools, it requires human beings to wield it properly.


I know “AI” as a concept has already reached the zenith of hype, even though many people can’t agree on what it actually is. Regardless of definition or application, it is making an impact on innovation: changing the nature of business, creating new markets, spinning out new jobs, and redefining existing ones. In the first week of March, Gallup announced that nearly 9 in 10 Americans report using at least one of six devices, programs, or services that feature elements of AI. The age of AI has definitely arrived, and it is already impacting healthcare. But it is important to note: AI can’t succeed without the human element.

When I talk about AI, I’m not wading into discussions about robot overlords. I’m referring only to modern data science and the development of technologies that can analyze and, more importantly, learn from data faster and more effectively than humans, enhancing speed and accuracy when applied to the automated completion of specific tasks. Pop-culture examples of success in this realm are IBM Watson besting Ken Jennings on Jeopardy! and Google DeepMind’s AlphaGo defeating Go masters Lee Sedol and Ke Jie. But those were just games.

When applied to more important endeavors, AI produces even more impressive results, and that is the real reason it is gaining such momentum across the country and, indeed, the world. Fed with human expertise and trained on domain-specific problems, AI can change everything. It can be applied to nearly every field imaginable, from healthcare to banking to transportation to education, and it is only going to expand from here.

This isn’t innovation spawned by data scientists and coders alone. To be effective, AI has to be imbued with non-algorithmic domain expertise (and fueled with enormous amounts of information). AI only performs when it’s powered by high-quality data and individuals with deep knowledge about specific subjects. One of the reasons it presents such a powerful growth opportunity is that it depends on cross-discipline collaboration to fulfill its potential. There’s a role for just about everyone in enhancing its capabilities.

In healthcare technology, this type of “applied AI” has already made an impact on the delivery of service. Consider medical imaging: Many people are familiar with IBM Watson’s cognitive computing applications for detecting breast cancer. There’s also Chinese AI startup Infervision, which works with half of the hospitals on Fudan University’s top-50 list. It serves about 450,000 patients with AI-CT, software that identifies core characteristics of lung cancer and suspicious lesions in computed tomography scans, and AI-DT, which targets cardiothoracic lesions and reportedly reduces digital radiography analysis time from ten minutes to as little as five seconds. On another front, a recent U.S. study showed that machine learning can classify heart anatomy on an ultrasound scan faster, more accurately, and more efficiently than a human technician.

Automation that drastically improves the performance of specific tasks is amazing, but just as importantly, AI is changing how we identify what needs to be delivered and what might be possible in the future, highlighting just how important it is to draw on input from a wide spectrum of distinctly human sources. For example, the Human Diagnosis Project is reportedly trying to blend machine learning with real physician experience. The project is compiling input from 7,500 doctors and 500 medical institutions in more than 80 countries “to develop a system that anyone — patient, doctor, organization, device developer, or researcher — can access in order to make more informed clinical decisions.”

Human insight is the required ingredient in this type of AI development. As recently outlined in the Journal of the American College of Radiology, “To be most effective in clinical practice, use cases for AI algorithms must be designed to impact a specific clinical need, and the output of the model must then seamlessly interface with our clinical workflow and existing resources such as our PACS, EHR, reporting software, and our digital modalities.” The technology will shape new solutions and reshape service industries, but people define its purpose with their unique knowledge, and people must also ensure it’s done right.

After controversy surrounding privacy law and Google DeepMind Health’s pilots with Britain’s NHS, DeepMind sought to better collaborate with those affected by its advanced technology: The Collaborative Listening Summit in January this year brought together members of the public, patient representatives, and NHS stakeholders to explore what principles should govern DeepMind Health’s operating practices and its engagement with the NHS. According to one participant, the discussion was wide-ranging: “These principles ranged from the technical — for example, how evidence should inform DeepMind’s practice — to the societal — for example, operating in the best interests of society.”

This type of comprehensive collaboration is not just a growth area in technology-enhanced medical care delivery, it extends across the spectrum of business and services. It can and will be applied to pretty much every endeavor, but those who will wield it and those whom it will serve must be involved in the process.

Transferring domain-specific knowledge from human experts into expert systems will amplify that expertise — and enable its extension for human benefit. In this sense, AI is an amazing new tool — a skeleton key to unlock previously unimaginable innovations. But it is still just a tool. And like all other tools, it requires human beings to wield it properly for our own human purposes.

Photo: pagadesign, Getty Images



Rick Altinger

Rick Altinger, Chief Executive Officer, joined Glooko as CEO in January 2013, shortly after Glooko received its first FDA clearance. Glooko is focused on helping clinicians and patients with diabetes make better and faster data-driven decisions to drive improved outcomes that lower costs. Prior to joining Glooko, Rick served as vice president of product management and experience design at Intuit Health. He earned a BA in mechanical engineering and MS in engineering management from Stanford University.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
