
Some of the most exciting (and scary) aspects of machine learning that you may not know about

For fans of the ethical roads less traveled in AI, here’s a look at some of the issues and questions keeping research scientists up at night.


The volume of chatter around artificial intelligence has risen to the point where many are inclined to dismiss it as hype. That's unfair: while certain aspects of the technology, such as self-driving cars, are still a long way from the mainstream, it is a fascinating topic. After listening to a recent talk by Dr. Eric Horvitz, managing director of Microsoft Research, I can appreciate that the number of applications conceived around the technology is matched only by the ethical dilemmas surrounding them. In both cases, they are far more varied than what typically dominates the conversation about AI.

For fans of the ethical roads less traveled in AI, Horvitz offered his audience at the SXSW conference earlier this month a fair few items to consider, alternating between hope for the human condition and fear for it. Although I previously highlighted some of the healthcare applications he discussed, he raised plenty of other issues that could one day be just as relevant to healthcare. I have included a few of them here.

Interpreting facial expressions

The idea of applying machine learning to make people more connected to each other, improving our communication skills in subtle ways, is fascinating to me. One example Horvitz used was a blind man conducting a meeting while receiving auditory cues about the facial expressions of his audience. The idea is to give him more insight into the people around him so he has a better sense of how the points he raises are received, beyond what the people in the meeting actually say. In a practical way, it gives him an additional layer of knowledge he wouldn't have otherwise and makes him feel more connected to others. A hypothetical sketch of one small piece of such a system follows.
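As a purely hypothetical sketch of the plumbing such a system might need, the snippet below condenses per-attendee expression labels, which a real vision model would produce from video frames, into a single spoken-style cue. The function name and the labels are invented for illustration; nothing here comes from Horvitz's demo.

```python
from collections import Counter

def summarize_reactions(expressions: list[str]) -> str:
    """Condense a room's expression labels into one short auditory cue.

    In a real system, `expressions` would come from a facial-expression
    classifier running on camera frames; here it is just a list of strings.
    """
    counts = Counter(expressions)
    dominant, n = counts.most_common(1)[0]
    return f"{n} of {len(expressions)} attendees look {dominant}"

# Labels a (hypothetical) vision model emitted for six attendees:
print(summarize_reactions(
    ["engaged", "confused", "engaged", "engaged", "neutral", "engaged"]
))
# -> "4 of 6 attendees look engaged"
```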

The ethical decisions of self-driving cars

As exciting as the prospect of self-driving cars is, Horvitz called attention to some important, still-unresolved questions about how they would perform in an accident, or when trying to avoid one. What decisions would the computer make when, say, a collision with a pedestrian is likely and the car has to make a split-second choice? Does it preserve the life of the driver or the pedestrian, if it comes to that? What responsibility does the manufacturer have? What values will be embedded in the system? How should manufacturers disclose this information?

A slide that was part of Dr. Eric Horvitz's talk at SXSW this year.

Adversarial machine learning

One fascinating topic addressed in the talk was how machine learning could be used with negative intent, an approach referred to as adversarial machine learning. It involves feeding a computer information that changes how it interprets images and words and how it processes information. In one study, a computer trained on images of a stop sign could be retrained to interpret those images as a yield sign. That has important implications for self-driving cars and for automated tasks in other sectors.
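To make the idea concrete, here is a minimal sketch of the "fast gradient sign" technique (Goodfellow et al., 2015), one common way adversarial examples are built. This is not the specific study Horvitz cited: the model below is a toy logistic-regression classifier with made-up weights standing in for an image classifier, and every number is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 10_000  # a stand-in for a small flattened image

# Hypothetical trained weights separating "stop sign" (1) from "yield" (0).
w = rng.normal(size=n_pixels)

def stop_probability(x):
    """Probability the toy model assigns to the 'stop sign' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A random "image," nudged so the model confidently calls it a stop sign.
x = rng.normal(size=n_pixels)
x += (5.0 - x @ w) / (w @ w) * w          # force the logit to exactly 5.0
print(f"clean prediction:       {stop_probability(x):.4f}")   # ~0.99

# For logistic regression, the gradient of the logit w.r.t. the input is w.
# Stepping each pixel slightly *against* the sign of that gradient barely
# changes the image, but the tiny shifts accumulate across thousands of
# pixels into a huge swing in the model's score.
epsilon = 0.01                             # ~1% of a typical pixel magnitude
x_adv = x - epsilon * np.sign(w)

print(f"adversarial prediction: {stop_probability(x_adv):.4f}")  # ~0 -> "yield"
print(f"max per-pixel change:   {np.max(np.abs(x_adv - x)):.3f}")
```

The punchline is that a change of about one percent per "pixel," far too small to notice in a real photograph, flips the model's confident "stop" into a confident "yield."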

Another facet of adversarial machine learning is tracking individuals' Web searches, the likes and dislikes they share on social networks, and the kinds of content they tend to click on, and then using that information to manipulate them. That could cover a wide swath of misdeeds, from fake tweets designed by neural networks to mimic the personality of the account holder to particularly nasty phishing attacks. Horvitz noted that these AI attacks on human minds will be an important issue in our lifetime.

“We’re talking about technologies that will touch us in much more intimate ways because they are the technologies of intellect,” Horvitz said.

Applying AI to judicial sentencing software

Although machine learning for clinical decision support is an area of interest in healthcare, helping to identify patients at risk of readmission or to analyze medical images for patterns and anomalies, it is also entering the realm of judicial sentencing. The concern is that the software tools some states permit judges to use in determining sentences carry the biases of their human creators and further erode confidence in the legal system. ProPublica drew attention to the issue last year.

Wrestling with ethical issues and challenges of AI

Horvitz likened the current stage of AI development to the Wright Brothers' first airplane flight at Kitty Hawk, North Carolina, which made it to 20 feet off the ground and lasted all of 12 seconds. But the risk and challenge of many technologies is that at a certain point they can progress far faster than anyone anticipates. That is why there has been a push to wrestle with the ethical issues of AI now rather than address them after the fact in a reactive way. One group on the front lines of these ethical issues is the Partnership on AI. Stanford University has also set up AI100, the One Hundred Year Study on Artificial Intelligence, an initiative to study the technology over the next century. The idea is that the group will study and anticipate how artificial intelligence will affect every aspect of how people work and live.

Photo: Andrzej Wojcicki, Getty Images