
Effective governance: Charting the future of responsible healthcare AI innovation

The power and promise of artificial intelligence comes in a Pandora’s box full of novel and multi-dimensional enterprise risks—legal, regulatory, financial, operational, ethical and reputational—that go above and beyond those associated with other technology innovations.

The power and promise of healthcare artificial intelligence (AI) in virtually every clinical dimension seems limitless. Preventing and detecting illness, improving diagnostic accuracy, facilitating treatment planning, accelerating research and discovery, enhancing patient engagement and experience, streamlining and automating clinical and administrative workstreams to optimize workforce management and reduce provider burnout, and empowering public health surveillance and population health management—these are but a few examples of AI’s potential for the healthcare space.

However, this power and promise comes in a Pandora’s box full of novel and multi-dimensional enterprise risks—legal, regulatory, financial, operational, ethical and reputational—that go above and beyond those associated with other technology innovations. Many AI solutions, particularly dynamic and autonomous machine-learning algorithms, remain in the development and testing stages and have an as-yet unproven track record of safety or measurable return on investment. This inherent technological uncertainty is exacerbated by the lack of an adequate healthcare legal and regulatory scheme for addressing how to responsibly manage AI’s unique liability and compliance challenges. Regulators will have difficulty adapting the current legal and regulatory framework to the pace of AI innovation.

The governance imperative

Healthcare boards simply do not have the luxury of waiting for greater certainty in AI technology or the law. They must act now to position their organizations to maximize AI’s potential for transforming healthcare and be prepared to face and manage these risks head-on, at the front-end of any AI innovation initiative and throughout the AI’s life cycle. For the foreseeable future, this will require an unusually active level of board engagement.

The most important step that a board can take now is to create a disciplined yet flexible framework for exercising governance oversight. The sooner boards take this foundational step, the better they will position themselves and their organizations to make well-informed and prudent decisions in the effort to harness the transformative potential of healthcare AI.

Taking this step will require boards and senior leaders first to establish and maintain their “AI literacy.” While healthcare leadership need not have a deep and all-encompassing knowledge of the underlying technologies that drive AI innovation, they will need at least a high-level understanding of its broad spectrum of functionalities, sophistication and associated risks in order to make responsible decisions.

Developing a “home-grown” governance oversight framework

As noted above, the current healthcare legal and regulatory scheme lacks the direction and focus that boards, their management teams and their advisors are accustomed to relying on to manage compliance risks. Therefore, while a board’s existing corporate compliance program will provide a good starting point, leadership will need to turn to other resources to construct a “home-grown” governance oversight framework tailored to its organization’s particular needs. Such a framework will enable boards to effectively manage AI technology’s new and different liability and compliance challenges.

For example, various domestic and international regulatory agencies, including the Food and Drug Administration (FDA), Office of the President of the United States, Office of Management and Budget, Department of Health and Human Services and World Health Organization, have published guidance addressing laws, policies and ethical principles that provide useful resources for boards.

Recognizing the unique nature and transformative potential of AI technology, the FDA has been blazing new regulatory trails through its efforts to adapt its own medical device regulatory scheme to the unique nature and rapid pace of AI and machine-learning (ML) technology innovation. In January 2021, for example, the FDA released the Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan, setting forth an oversight framework that seeks to balance the essence of AI’s “ability to learn from real-world use and experience, and its capability to improve its performance” with the importance of ensuring that such solutions “will deliver safe and effective software functionality that improves the quality of care that patients receive.” The FDA tailors its regulatory oversight of AI technologies to the intended use of the AI solution (e.g., patient care and research versus healthcare operations) and to where a particular AI solution falls on the safety-risk spectrum.

In October 2021, the FDA joined forces with Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) to identify 10 guiding principles as a foundation for the development of safe, effective and high-quality AI/ML-enabled medical devices.

In December 2021, the Federal Trade Commission (FTC) issued an Advance Notice of Proposed Rulemaking to initiate its consideration of rulemaking on privacy and artificial intelligence in order to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” The FTC’s 2020 and 2021 guidelines emphasize the importance of transparency, explainability and fairness in the use of AI in the consumer context.

In April 2021, the European Commission unveiled its long-awaited proposal, Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Like the FDA, the Commission takes a risk-based approach to balancing the benefits and the risks of AI systems throughout their entire life cycle.

The U.K. just announced its plans to pilot a new AI Standards Hub for shaping global AI standards. The Alan Turing Institute will lead the pilot, with the support of the British Standards Institution and the National Physical Laboratory. The roles of the Hub will include improving the governance of AI, complementing pro-innovation regulation, and unlocking the huge economic potential of these technologies to boost investment and employment now that the UK has left the European Union.

The World Health Organization’s publication, Ethics and Governance of Artificial Intelligence for Health, provides a broad overview of laws, policies and ethical principles boards can draw on when considering AI applications for the delivery of healthcare services, research and development, and systems management. It has also recommended a governance framework focusing on issues such as consent, data protection and sharing, specific private-sector and public-sector interests, and the development of policy and legislation.

Other guidance and resources are available from other international governmental bodies, organizations with a track record of successful AI innovation and development, as well as from industry watchdogs, trade associations, standards-setting organizations, private sector collaborations and public-private partnerships.

Finally, internal and external legal counsel together bring a wealth of expertise and experience with complex healthcare regulatory schemes that will be invaluable for navigating the complexities and uncertainties of the evolving AI regulatory framework while striking the appropriate balance between opportunity and risk.

Key ingredients for an effective “home-grown” governance oversight framework

Recent AI innovation guidance and real-world experience strongly suggest that an effective healthcare AI innovation governance oversight framework should:

  • Align with the organization’s compliance and enterprise-risk programs and associated risk tolerances as well as its enterprise-wide and technology-innovation strategic plans.
  • Foster patient and provider trust in the AI technology.
  • Include decision-making standards, processes and protocols, with a particular focus on transparency and clear lines of responsibility and accountability.
  • Articulate criteria and associated risk tolerances for selecting AI innovation opportunities and partners for which there is an acceptable balance between opportunity and risk.
  • Establish principles and criteria for maintaining the integrity, privacy, security and non-discriminatory nature of the data supporting the AI development and deployment.
  • Address ethical and social responsibility considerations (e.g., discrimination, healthcare disparities).
  • Provide for ongoing monitoring and testing of enterprise-risk management and return on investment throughout the lifecycle of the AI.
  • Establish clear, yet nimble, pathways for adapting to and managing the often unknown and unforeseen changes, challenges, opportunities, risks, liabilities, standards and requirements that will arise as AI continues to evolve.

The future of healthcare AI is as promising as it is precarious. Responsibly managing the potential benefits and risks is of paramount importance throughout an AI system’s entire life cycle. While the regulatory scheme is evolving at a slower pace than the technology itself, various domestic and international governmental bodies have issued helpful guidance and are making meaningful progress toward formal rulemaking. Various other sources have offered guidance for navigating this difficult terrain. While there is no prescribed pathway or other one-size-fits-all solution, boards can draw meaningful direction from these public and private sector resources to develop their own framework for responsible and successful management of AI’s benefits and risks. There is no time like the present for healthcare boards to do so.

Photo: metamorworks, Getty Images

Bernadette Broccolo counsels the full range of health and life science industry stakeholders on the transactional, strategic and regulatory dimensions of complex strategies for harnessing and deploying big data assets, artificial intelligence and other digital health innovation discovery, commercialization and deployment; streamlining biomedical research and precision medicine; and generating alternative revenue streams. She regularly negotiates agreements for a wide spectrum of digital health innovation collaborations. The areas of concentration Bernadette skillfully applies to such engagements include privacy law and other digital health innovation compliance, health information technology acquisition transactions, health information network formation, collaboration transactions, corporate governance, and human subject research compliance. Chambers USA consistently recognizes Bernadette in its top tier rankings of health lawyers both nationally and in Illinois, and she is a frequent and prominent speaker and author in her areas of expertise, including her most recent contribution as Co-Editor in Chief and Co-Author of The Law of Digital Health, published by AHLA in March 2018.

Michael W. Peregrine represents corporations (and their officers and directors) in connection with the full range of governance and fiduciary duty matters, officer-director liability issues, charitable trust law and corporate structure. Michael is recognized as one of the leading national practitioners in corporate governance law, and is a senior contributor to Forbes.com, where his articles focus on governance and leadership topics.
Michael is outside governance counsel to many prominent corporations, including hospitals and health systems, voluntary health organizations, colleges and universities, social service agencies, health insurance companies, pharmaceutical companies and financial institutions.
Michael is also often called upon to advise boards in response to external/regulatory challenges to governance. He frequently serves as special counsel in connection with confidential internal board reviews and investigations.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
