MedCity Influencers

Developing trustworthy AI solutions for healthcare

Artificial intelligence could help with improving patient care, accelerating drug discovery and enabling the efficient operation and management of healthcare systems. But the focus should be on AI that can assist with human decision-making, not replace it.

The use of AI has been steadily increasing in healthcare, a development that is both promising and worrying if left unchecked.

AI technology has made remarkable advances in the last decade. Computers can accurately classify images and map their environment, giving cars, drones and robots the ability to navigate real-world spaces. AI has enabled human-machine interactions that were not possible before.

Because of this, AI is being explored for a wide range of healthcare applications. That includes improving patient care, accelerating drug discovery and enabling the efficient operation and management of healthcare systems.

Key targets for patient care include analysis of radiology images and tissue samples for detection and diagnostics, as well as individualized precision medicine for disease treatment and therapy. But it is especially important to proceed with caution whenever a machine is positioned to make life and death decisions.

In a healthcare setting, the focus should be on AI that can assist with human decision-making, not replace it. A framework in which humans cooperate with machines to arrive at such decisions is worth pursuing, recognizing that machines can offer key insights that complement the expertise of medical professionals.

It’s also worth considering that machines may have serious flaws in their judgments. Depending on the AI tools used, they may also lack the ability to explain the reasons for a particular decision in a manner that patients and doctors can trust.

Factors impacting trustworthiness of AI healthcare decisions

There are numerous factors that influence the trustworthiness of AI systems. Bias has been widely cited as one major concern in AI-based decision-making systems.

A blog post from Michael Jordan, a professor of computer science and statistics at UC Berkeley, recounted the story of his pregnant wife being told she was at increased risk of giving birth to a child with Down syndrome. Their ultrasound showed white spots around the baby’s heart, an indicator of the condition. However, the risk estimate was based on a statistical model built from data collected on much lower-resolution imaging machines; the newer, higher-resolution machine picked up spots that were effectively measurement noise relative to that model, and the result was a recommendation to perform a risky amniocentesis procedure. Fortunately, they decided not to follow through on the procedure, and Jordan’s wife gave birth to a healthy baby some months later. Others may not have been so lucky.
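
To see why a marker like this can mislead once the measurement no longer matches the model, consider a rough back-of-the-envelope calculation. The sketch below applies Bayes’ rule with purely illustrative numbers (the prevalence, sensitivity and false-positive rates are assumptions, not figures from the actual screening study): when higher-resolution imaging flags spots more often, the effective false-positive rate rises and the chance that a flagged result reflects a real condition drops sharply.

```python
# Illustrative only: how an inflated false-positive rate erodes the value of a
# screening marker. None of these numbers come from the actual screening study;
# they are placeholders to show the shape of the argument.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(condition | marker present) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

prevalence = 0.002  # assumed prior probability of the condition

# Marker behaving as the original, lower-resolution model assumed
ppv_old = positive_predictive_value(prevalence, sensitivity=0.6, false_positive_rate=0.01)

# Higher-resolution imaging flags spots far more often (extra "noise"),
# so the effective false-positive rate rises while prevalence stays the same
ppv_new = positive_predictive_value(prevalence, sensitivity=0.6, false_positive_rate=0.08)

print(f"PPV under the model's assumptions: {ppv_old:.1%}")         # ~10.7%
print(f"PPV with the inflated false-positive rate: {ppv_new:.1%}")  # ~1.5%
```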

Experiences like this underscore the need for a principled approach to building and validating AI-based decision-making systems. Beyond issues of data quality, bias and robustness, it is necessary to develop systems that are explainable and interpretable, along with risk management strategies to identify priorities and guide decisions. Having a sound framework and policies in place will help AI systems make better decisions and build trust among stakeholders.

Other factors involve ethical and societal concerns. These are important to consider for any AI-based decision-making system and critical for systems responsible for ensuring safety. We could imagine, for instance, a healthcare management system deciding which patients should receive a treatment that is in limited supply, or whether one patient should be sent to the ICU ahead of others in need of more urgent care.

There are concerns around privacy and an expectation that AI systems will have some level of transparency and accountability. Some of these issues have no clear answer and require much further thought.

Certification to the rescue?

Many industries have benefited from standards that support a level of guidance around product or service development, production and distribution. The International Organization for Standardization (ISO) has established numerous management system standards that set requirements to help organizations manage their policies and processes to achieve specific objectives.

The AI community is developing a suite of standards that will guide industries on best practices. Methods to assess the robustness of neural networks and the bias in AI systems have already been created. Others under development will specify risk management processes, methodologies to treat unwanted bias and approaches to ensure transparency. Compared to other industries, healthcare will certainly face more stringent requirements around data quality, reporting and more.
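
As a small illustration of the kind of assessment such standards aim to formalize, the sketch below computes one simple group-level disparity measure on hypothetical model outputs. The metric (a demographic parity gap), the data and the patient groups are all assumptions chosen for illustration; no published standard prescribes this exact check.

```python
# Minimal sketch of one common bias check: compare a model's positive-prediction
# rates across two patient groups. The data, groups and interpretation are hypothetical.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = flagged for follow-up) and patient groups
preds  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(preds, groups)
print(rates)               # {'A': 0.6, 'B': 0.4}
print(f"gap = {gap:.2f}")  # a large gap would prompt a closer look at the model and data
```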

While standards and certification programs will not be a silver bullet, they will eventually give organizations a framework to use AI responsibly, measure the effectiveness and efficiency of their systems, manage risks and continually improve processes. This is still a few years away, but the community is working toward that goal.

Assisting the decision-making process

So, what can we do in the meantime? We should focus on AI that can assist with the decision-making process, including tools that can help medical professionals make informed decisions.

Systems that can handle or assist with routine tasks, such as patient check-in, taking vitals and maintaining patient records, are also beneficial. They help medical professionals spend more time on urgent issues and create an opportunity for more face-to-face interactions with patients.

For example, imagine a technology solution that performs touchless, line-of-sight monitoring of vital signs such as heart rate, respiration rate and body temperature in places where people are gathered. Installing such camera systems in nursing homes or residences where seniors are “aging in place” allows for continuous monitoring of their conditions and can alert caregivers or medical professionals to any changes in a person’s health that may need attention.
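
A minimal sketch of the alerting logic such a system might use follows; the vital sign, baseline window and deviation threshold are illustrative assumptions rather than the behavior of any particular product.

```python
# Hypothetical sketch: flag a resident's vital sign when it drifts well outside
# a rolling baseline. Thresholds and window size are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class VitalSignMonitor:
    def __init__(self, name, window=60, z_threshold=3.0):
        self.name = name
        self.readings = deque(maxlen=window)  # rolling baseline of recent readings
        self.z_threshold = z_threshold

    def update(self, value):
        """Return an alert string if the new reading deviates sharply from baseline."""
        alert = None
        if len(self.readings) >= 10:  # need enough history to form a baseline
            baseline, spread = mean(self.readings), stdev(self.readings)
            if spread > 0 and abs(value - baseline) / spread > self.z_threshold:
                alert = f"{self.name}: {value} deviates from baseline {baseline:.1f}"
        self.readings.append(value)
        return alert

# Example: steady heart rate followed by a sudden jump
hr = VitalSignMonitor("heart rate")
for reading in [72, 74, 71, 73, 75, 72, 74, 73, 72, 71, 70, 118]:
    if (msg := hr.update(reading)):
        print(msg)  # a caregiver or clinician would be notified here
```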

As technology evolves and our understanding of the AI-based decision-making process improves, we certainly expect it to play a greater role in healthcare decisions.

According to the American Hospital Association, the nation will face a shortage of 124,000 physicians by 2033, and at least 200,000 nurses will need to be hired per year to meet increased demand. The American Health Care Association and National Center for Assisted Living (AHCA/NCAL) also found that 99% of nursing homes and 96% of assisted living facilities are facing a staffing shortage.

Given these sobering numbers, the growth of AI and automation for healthcare applications will be critical in the coming decades. They underscore the need for AI that helps healthcare professionals, today and in the future, work more efficiently and intelligently without sacrificing safety.

While still a few years away, AI-driven solutions will align with emerging industry standards to deliver tools that safely assess and monitor those in need of care, assist in patient diagnosis and treatment recommendations, and dramatically enhance the quality of patient care.

Photo: metamorworks, Getty Images

Anthony Vetro is a Vice President & Director at Mitsubishi Electric Research Laboratories, where he leads the development of AI technologies for a wide range of businesses including healthcare, mobility, robotics, and public infrastructure.