When algorithms help — and when they hurt

A recent study published in Nature Medicine found that an algorithm could detect causes of knee pain missed by radiologists, helping account for racial disparities in knee pain. But in other cases, algorithms can deepen disparities, allocating clinical resources in ways that disfavor underserved patients. Dr. Ziad Obermeyer, a professor at UC Berkeley’s School of Public Health, shared how he approaches algorithms in a biased world.

Some researchers see a role for AI in healthcare in helping clinicians make complex decisions. Others see a stream of tools, built on biased data, that fail to achieve the intended result.

Dr. Ziad Obermeyer, a professor of health policy and management at UC Berkeley’s School of Public Health, has identified several flawed algorithms through his research, such as tools used by insurers to allocate care resources to high-risk patients. He does this work because he sees a future for algorithms in healthcare.

“I’m optimistic about the role algorithms are going to play in the future of medicine, but in order to get there, we have to be very aware of these problems and how to fix them,” he said in a phone interview.

Obermeyer, who will be speaking at the virtual Health Datapalooza 2021 conference this month, focuses most of his research on fixing biased algorithms or developing new ones to help fight existing biases.

For example, in a paper published in Science last year, he found that an algorithm used by several insurers and health systems to determine which patients might benefit from additional care resources was racially biased. The tool, developed by Optum, measured patients’ care needs based on the cost of their care in order to identify candidates for high-risk care management programs. But because unequal access to care means Black patients generate lower medical costs than white patients with the same level of illness, the tool systematically underestimated Black patients’ needs.
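To see that mechanism concretely, here is a minimal sketch in Python, using simulated data and made-up numbers and variable names rather than Optum’s actual model: if one group generates less cost than another at the same level of illness, a score trained to predict cost will flag fewer members of that group for extra care, even though the score never sees race.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100_000

group_b = rng.integers(0, 2, n)                 # 1 = group with reduced access to care
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # true health need, identical across groups

# Assumption: group B generates ~30% less cost at the same level of need,
# standing in for unequal access to care.
access = np.where(group_b == 1, 0.7, 1.0)
prior_cost = need * access + rng.normal(0, 0.2, n)
future_cost = need * access + rng.normal(0, 0.2, n)

# The "risk score" predicts future cost from prior cost; group is never a feature.
X = prior_cost.reshape(-1, 1)
score = LinearRegression().fit(X, future_cost).predict(X)

# Flag the top 10% of scores for a high-risk care management program.
flagged = score >= np.quantile(score, 0.9)

# The score is accurate about cost, yet group B is flagged less often and
# must be sicker than group A to make the cut.
for g, name in ((0, "group A"), (1, "group B")):
    in_group = group_b == g
    print(f"{name}: share flagged = {flagged[in_group].mean():.3f}, "
          f"mean true need of those flagged = {need[in_group & flagged].mean():.2f}")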

This problem isn’t just limited to one algorithm. Even policy decisions, such as how Covid-19 relief funds are distributed, often use cost as a measure of need.

“I think it’s incredibly widespread,” Obermeyer said.

Because a large portion of CARES Act relief funds was sent to hospitals based on their past revenue, those allocations ultimately didn’t reflect hospitals’ financial needs or the health needs of their patients, according to a study published in JAMA by Pragya Kakani, a PhD candidate in health policy at Harvard University.

“For a lot of these projects we started off with a clinical question,” Obermeyer said. “We found along the way that the algorithms that people were currently using contained this large amount of bias. It seemed really important to document that and find ways to fix it.”

But in some cases, algorithms can prove useful. In a more recent study, Obermeyer helped build an algorithm to predict patients’ knee pain from X-ray images. Many people were reporting knee pain that radiologists weren’t detecting, which could affect their ability to get surgery.

Osteoarthritis severity is classified using the Kellgren-Lawrence Grade, a system developed in the 1950s based on white British populations. Because of this, providers might miss physical causes of pain in nonwhite patients, researchers wrote in a study recently published in Nature Medicine. On average, Black patients reported higher levels of pain, even when their knees looked the same on X-rays.

Because the algorithm was trained on a diverse group of patients, it was able to account for a significant portion of the racial disparity in knee pain.
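As a rough sketch of that design choice (hypothetical model and data, written in Python with PyTorch; not the published pipeline): the training target is each patient’s own pain report rather than the radiologist’s Kellgren-Lawrence grade, so the network is free to learn image features the older grading system never encoded.

import torch
import torch.nn as nn
from torchvision import models

class PainFromXray(nn.Module):
    # Predict a continuous patient-reported pain score from a knee X-ray.
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regression head
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x).squeeze(-1)

model = PainFromXray()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One hypothetical training step: `images` stands in for X-rays from a
# demographically diverse cohort, `reported_pain` for the patients' own
# 0-100 pain scores, the label choice that distinguishes this approach.
images = torch.randn(8, 3, 224, 224)   # placeholder batch of X-ray tensors
reported_pain = torch.rand(8) * 100

optimizer.zero_grad()
loss = loss_fn(model(images), reported_pain)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.2f}")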

“A lot of people were reporting pain that had something in their knee that was not seen by the radiologist,” he said. “Our medical knowledge, because it’s formed in a certain population with certain problems, doesn’t always apply to other populations.”

Photo credit: AnuStudio, Getty Images
