MedCity Influencers

Reining In the Wild West of AI

In healthcare, a faulty algorithm can be a matter of life and death.


Wherever you look lately, every healthcare technology solution seems to incorporate some form of AI that promises to improve the clinician experience. There are some valuable use cases of AI in the provider space, without a doubt. Ambient AI scribes, for example, have generally been met with open arms among providers, as they reduce administrative burdens and free up more time to spend with the patient. 

But many iterations of AI are within a realm that feels like the Wild West, where bold claims abound but aren't backed up by clinical research or regulatory oversight. This isn't surprising, though, as many companies offering AI would rather not endure the rigorous procedures and significant time investment required to obtain regulatory clearance. 

The consequences of unchecked AI may not be as severe in other industries, but in healthcare, a faulty algorithm can be a matter of life and death. As healthcare becomes saturated with AI solutions that blur the line between what's regulated and what isn't, clinicians have been left in the dark and are pushing back. In one recent example, nurses in San Francisco protested Kaiser Permanente's use of AI, claiming the technology is degrading and devaluing the role of nurses, ultimately putting patient safety at risk. It's important to note that their concern is directed specifically at "untested" forms of AI, which should be a wake-up call to companies that are hesitant to secure regulatory clearance.


The marketplace needs guidance on how to navigate the AI landscape with so many players making bold but unsubstantiated claims. One of the smartest things companies offering AI can do is to recognize the value of clinical validation and regulation, which is fundamental to gaining clinicians’ trust and ensuring the safety of their products. This, combined with a thoughtful approach to change management, will create a level playing field where the coexistence of AI and clinicians brings healthcare to the next level.

Approaching AI development through a regulatory-grade lens

When starting down the path to FDA clearance, companies should have a clear goal about what they’re trying to prove and be able to articulate the clinical value that they’re aiming to deliver. The ability to demonstrate that a solution is positively impacting the care of a patient and not creating patient safety issues is crucial. Committing to these fundamental principles upfront ensures that there’s a level of responsibility built into AI models.

Software as a Service (SaaS) companies should also be generally aware of the FDA’s approach to medical device clearance, which measures the quality of the end-to-end development process, including clinical validation studies performed in real-world patient populations. Furthermore, post-market surveillance requirements ensure the continued safety and performance of devices while on the market. Having this insight can inform the development of AI that’s designed, developed, tested, and validated with at least the same rigor as the devices their customers are likely already using. 


Developing a solid working relationship with the FDA is also key. Bringing in a regulatory consultant who knows how to navigate the process is a great way to jumpstart this relationship. The value of this is two-fold, as the company gains valuable insights, and the regulators receive submissions that meet their exact specifications. This is particularly beneficial to the FDA, as they face a deluge of AI solutions coming into the market. 

Bolstering regulatory quality with change management

Once a company commits to the regulatory process, the success of deploying a clinical AI solution then depends on the human change management that accompanies it to ensure that clinicians adopt the solution in their daily workflow. Part of the regulatory process involves testing the solution in real-world settings and, ideally, incorporating clinicians' feedback. This should not end once a solution is cleared; healthcare organizations must continue working with AI developers to understand how to implement the tool in a practical way. Be mindful of the individual clinician's perspective to ensure their lives are made better by the solution and that patient safety and outcomes will be improved too. 

Perhaps the most important message to convey during implementation is that the solution is not there to replace the clinician; rather, it's meant to augment their work and allow them to practice at the top of their license. Emphasize the value-add: it's not just another piece of technology that gets in the way and hinders clinicians' ability, but one that improves their management of patients. The true opportunity with AI is that it enables clinicians to get back to doing the things that they were trained to do, and that they enjoy doing. AI can handle the repetitive, prescriptive tasks that bog clinicians down, leaving them with more time focused on direct patient care. This is at the core of why they became clinicians in the first place.

Updating regulatory standards to promote patient safety

It’s time to enhance the current regulatory framework and adapt it to contemporary approaches. Regulating AI should be viewed as a spectrum. Solutions that address back-office manual processes certainly need to have oversight and constraints on how they’re marketed, but their level of risk differs from clinically oriented solutions used alongside clinicians. Clinical and other forms of AI that are deemed more consequential require the appropriate protections to ensure patient safety and care quality are not harmed in the process. Regulatory bodies like the FDA have limited bandwidth, so a tiered approach helps to triage and prioritize the review of AI that carries greater risk.

Regulating these solutions ensures that they are deployed with a strong regard for patient safety and that the Hippocratic Oath's "do no harm" mantra is maintained. Ultimately, perseverance is the key to optimizing care quality. These processes don't happen overnight; they require significant investment and patience. To leverage AI in clinical settings, healthcare organizations need to be committed for the long term.

Photo: Carol Yepes, Getty Images

Paul Roscoe is the CEO of CLEW Medical, which offers the first FDA-cleared, AI-based clinical predictive models for high-acuity care. Prior to CLEW, Paul was CEO of Trinda Health and was responsible for establishing the company as the industry leader in quality-oriented clinical documentation solutions. Before this, Paul was CEO and Co-Founder of Docent Health, after serving as CEO of Crimson, an Advisory Board Company. Paul also held executive roles at Microsoft’s Healthcare Solutions Group, VisionWare (acquired by Civica), and Sybase (acquired by SAP). Throughout his career, Paul has established an exemplary record of building and scaling organizations that deliver significant value to healthcare customers worldwide.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.