MedCity Influencers, Artificial Intelligence

The need for ethical data models for comprehensive data sharing

It is imperative for health systems to form data sharing partnerships that allow for the analysis of de-identified clinical data, while effectively and comprehensively protecting patient information.


There is growing recognition that the analysis of patient data can bring powerful improvements in patient care and the development of new medicines. At the same time, there is acknowledgement that patient data sharing must be coupled with transparency and an ethical approach.

It is therefore imperative for health systems to form data sharing partnerships that allow for the analysis of de-identified clinical data while effectively and comprehensively protecting patient information. In the past, those two goals were often seen as mutually exclusive, but emerging ethical partnership models meet both aspirations. In the U.S., there is rightfully increased scrutiny of big tech's use of data, especially medical data. For examples of ethical data models that emphasize transparency and patient protections, it can be helpful to look abroad.

The U.S. has generally lagged its European counterparts in protecting consumer (including patient) data and privacy. In healthcare, the nearly three-decade-old HIPAA statute, passed under the Clinton administration, remains the primary protector of patient privacy rights. As Europe has leaned into privacy and consumer protection in recent years, ethical data sharing models have been more thoroughly explored there and have received more attention and investment. As a result, best practices for ethical data management are now much better understood, and some very successful approaches have emerged.

One U.K. model was born from the challenge of finding an ethical, transparent way to analyze data from the nationalized National Health Service (NHS) to support life sciences research. NHS hospitals, which operate fairly autonomously within the national system, wanted to contribute their patient data to the cause of advancing potentially life-saving research, but were concerned that such collaboration could expose protected patient information. Many of these hospitals embraced a partnership model that not only added more layers of privacy protection but also increased the research value of their data, while allowing them to retain control over how that data is used. They have a voice in how, why, when, and where the data are analyzed.

Sophisticated clinical artificial intelligence (AI) is a critical enabling technology for successful ethical data sharing models. It supports deriving high-value insights and providing rapid responses to research questions without sharing underlying data. This technology allows health systems to meet their ethical duty to protect patient rights, while still effectively supporting the vital medical research that leads to new discoveries, cures, and therapies.
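To make the idea of answering research questions without sharing underlying data more concrete, here is a minimal sketch of a federated-style aggregate query. It assumes a simple setup in which each hospital runs a question against its own records locally and only de-identified counts, never patient-level rows, leave the site; the site names, record fields, and function names are illustrative, not drawn from any real platform.

```python
# Hedged sketch: each site computes an aggregate locally; only the
# aggregate (a count), not patient-level data, is shared and combined.

def local_aggregate(records, predicate):
    """Run a research question against local records; return only counts."""
    matched = sum(1 for r in records if predicate(r))
    return {"matched": matched, "total": len(records)}

# Each site holds its own de-identified records; the rows are never pooled.
site_a = [{"age": 67, "ckd_stage": 3}, {"age": 54, "ckd_stage": 1}]
site_b = [{"age": 72, "ckd_stage": 4}]

# Illustrative research question: how many late-stage CKD patients?
question = lambda r: r["ckd_stage"] >= 3

# Only the per-site aggregates are combined to answer the question.
aggregates = [local_aggregate(s, question) for s in (site_a, site_b)]
matched = sum(a["matched"] for a in aggregates)
total = sum(a["total"] for a in aggregates)
print(f"{matched} of {total} patients match")  # prints "2 of 3 patients match"
```

Real deployments layer far more on top of this pattern (formal de-identification, minimum cell sizes, audit trails, and governance approvals for each query), but the core design choice is the same: the question travels to the data, and only aggregate answers travel back.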

Clinical AI may ultimately do more than enable the kinds of ethical data sharing frameworks U.S. health systems prefer. It may also prove a more effective model for driving medical innovation. For one thing, these models create much more active partnerships between providers and researchers, which can have profound effects on how the research community thinks about disease and treating patients. For example, a health system nephrologist developing clinical algorithms to better predict chronic kidney disease (CKD) could share de-identified data plus clinical pathway insights with a life sciences company developing a therapeutic to slow the progression of CKD. Or a physician leader working to identify novel biomarkers for congestive heart failure (CHF) could combine research efforts with a biotech firm also working on targeted medicines for cohorts of patients with CHF.

Ethical data sharing models can also be used by providers themselves to get better, faster access to the insights derived from their data, and to help directly inform and guide patient care.

The right approach can build trust across the entire system – for health systems, the patients they serve, and the life sciences companies they collaborate with. As this trend continues to accelerate, attention to ethical models and the transparent adoption and execution of those approaches will become the norm.


Derek Baird is President, North America at Sensyne Health and Commercial Director, SENSE. Derek is responsible for building Sensyne Health's U.S. operations and setting the global strategy for the SENSE clinical AI platform. Previously, Derek was SVP of Growth for AVIA, where he led the development of a national network of 50+ health systems and payers seeking to digitally transform their organizations. Prior to AVIA, Derek was an SVP at Health Language, guiding the company through explosive growth and its 2013 sale to Wolters Kluwer Health. He has also held marketing and product management leadership positions at Zynx Health and Practice Partner (acquired by McKesson). Derek is an investor in, and advisor to, several digital health companies.

He holds an MBA from the UCLA Anderson School of Management and a BA in Business Administration from the University of Washington.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.