
Healthcare has evolved with the adoption of AI, and so should our ethical playbook



It is no longer a question of if artificial intelligence (AI) will play a role in healthcare delivery and diagnostics. Instead, the question is how the technology can ethically be deployed to fill this role, and what guidelines should be put in place today to support upstream thinking around challenges that will arise tomorrow.

AI still feels a tad like the “Wild West,” with limited and splintered policies and laws regulating and managing the use of the technology, especially in the healthcare arena. However, ahead of the curve, the World Health Organization (WHO) is already prompting leaders to consider the ethical responsibilities and dilemmas of integrating AI more heavily into care, posing questions around maintaining human autonomy, ensuring transparency, and establishing inclusiveness and equity, just to name a few.

While this is a tall order, it is one that governing bodies and leaders will need to fill. The Covid-19 pandemic taught us a lot, including that innovation is readily available to deploy; cue telehealth’s resounding utilization at the height of the pandemic. But because so much use happened so quickly, it was at the peak of adoption that patients and providers found themselves facing barriers to quality care. Today, we have an opportunity to get ahead of the potholes and pave a cleaner, more streamlined path that works with AI instead of against it, bettering the healthcare experience for patients and providers alike. We can start with three core principles to guide AI, unpacked below.

Keeping humans at the heart of health decisions 

It is only a matter of time before AI becomes central to everything we do in the healthcare arena, and beyond. AI is an incredibly smart “machine,” with the ability to make decisions that could significantly improve patient care and save time and cost; but that doesn’t mean AI should be given the power, or the authority, to make the final call. People (doctors and patients alike) should be able to access and oversee activity in each step of the care continuum, despite the accuracy and ease of integrating AI. We should absolutely use AI to inform, as it offers a vast amount of invaluable data that can be extracted and quickly disseminated, but locks should be put in place that require human input before proceeding to deliver a prognosis.

Electronic Health Records (EHRs) are an excellent example of the evolution in leveraging AI-driven predictive tools to help providers streamline workflows, medical decisions and treatment plans. However, the provider (the human) must remain at the heart of the process, pulling the levers to unlock the next step in care and development of treatment plans.

Holding healthtech to the highest standards 

There are myriad AI-powered devices and services available today, many of which are not HIPAA compliant or FDA-approved. That doesn’t mean the technology isn’t safe or useful, but to return to the “Wild West” theme, it does suggest a level of scrutiny that should be applied when determining what AI technology to adopt into a practice, and how much weight its data should be given when making decisions. Similar to the standards required for any diagnostic tool or healthcare device, AI tools should be tested and required to prove their accuracy. This means publishing and documenting sufficient information before the technology is deployed to generate meaningful public consultation and debate on how the technology is designed, and how it should or should not be used.

Ensuring inclusiveness and equity

AI tools and systems must be monitored and evaluated to identify disproportionate effects on specific groups of people. No technology, AI or otherwise, should sustain or worsen existing forms of bias and discrimination.

When developing and deploying AI-powered technology, it is critical to keep in mind different skin tones, gender designations and other differences in human characteristics, to make sure health providers deliver consistent and accurate care. One study of three commercial gender-recognition systems reported error rates of up to 34% for dark-skinned women — a rate nearly 49 times that for white men.

AI for health must be designed to encourage the widest possible appropriate, equitable use of and access to care, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes. This also means enabling health equity: affording the greatest possible accessibility to care and removing requirements to travel great distances, or to purchase numerous products and services, in order to benefit from the care that AI can provide. Bias of any kind is a threat to inclusiveness and equity, jeopardizing standardized care, and even lives.

Healthcare has evolved with the adoption of AI, and so should our ethical playbook. There are myriad potential roadblocks we will encounter as we continue to leverage this incredible technology in one of the most sensitive areas there is: healthcare. But there is no turning back, nor should there be. AI has tremendous potential for improving our healthcare system and patient experience, but we must establish the rules of the road now, before we continue on our journey.

Photo: mrspopman, Getty Images

David Maman is the Co-founder and CEO of Binah.ai, an innovative and successful start-up that is transforming the way healthcare services are being delivered and consumed. David is spearheading the Binah.ai team toward its mission to make healthcare services accessible to anyone, anywhere, anytime. Using its award-winning technology, Binah.ai allows users to extract vital signs and mental stress levels just by looking at a smartphone, tablet or laptop camera. A serial entrepreneur, David is currently leading his 13th startup, after driving numerous start-ups from vision to international success, including Hexatier (acquired by Huawei), Precos, Vanadium-soft, GreenCloud, Teridion, and more. He has 24 years of experience in leadership, AI, cybersecurity, development, and networking.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
