Health IT

DARPA researchers want to know how and why machine learning algorithms get it wrong

David Gunning, the program manager at DARPA overseeing the project, explained in a Wall Street Journal interview that understanding how complex algorithms reach their conclusions is key if artificial intelligence tools are to be widely adopted.


A group of researchers from the Defense Advanced Research Projects Agency is coordinating an effort to better understand the reasoning artificial intelligence algorithms use to arrive at their conclusions. Despite the excitement about the potential of machine learning and deep learning, widespread adoption in healthcare remains some way off. A better grasp of these reasoning processes could help those efforts.

David Gunning, the program manager at DARPA overseeing the project, explained its motivation in an interview with The Wall Street Journal.

The project involves 100 researchers at more than 30 universities and private institutions. They want to produce “explainable AI” systems that can convert complex computer language into an easy-to-understand, step-by-step account of how they arrived at their decisions, the article noted. The goal is to produce a group of machine learning tools and user interfaces that government or commercial groups can use to explain how their own AI products reach their conclusions.

“If it’s finding patients that need special attention in the hospital, or wanting to know why your car stopped in the middle of the road, or why your drone turned around and didn’t do its mission … then you really need an explanation,” Gunning said.

At the SXSW Interactive festival in Austin earlier this year, Microsoft Research Managing Director Dr. Eric Horvitz highlighted his organization’s own work exploring why some might be troubled by the reasoning processes of certain machine learning tools. One concern is that the neural networks used to “teach” computers can also be used to manipulate discussion on social networks, or can lead people to make poor decisions when biases are built into these networks, as with judicial decision support tools used for sentencing guidance.

CB Insights counted 106 healthcare startups using some form of artificial intelligence in a February report. Although medical imaging analysis is the most prevalent category, patient engagement is an interesting area as well. New York-based AiCure, which raised $12.3 million in a Series A round, is using artificial intelligence to support medication adherence. Sense.ly developed a virtual nursing assistant, Molly, to follow up with patients after they are discharged from the hospital.

Photo: John Lund, Getty Images
