With Anthropic and OpenAI expanding into healthcare, we are entering a new era in AI. According to OpenAI, more than 40 million Americans use ChatGPT every day to ask questions about healthcare.
Whether it is helping healthcare organizations reduce administrative burden or enabling individuals to interpret their lab results, AI has tremendous potential to improve patients’ lives. OpenAI’s enterprise-grade AI tools have already been rolled out to institutions such as Boston Children’s Hospital, Cedars-Sinai Medical Center, and Stanford Medicine Children’s Health.
Interestingly, both Anthropic and OpenAI have also announced consumer AI health tools. A question I keep getting from curious individuals is this: If leading hospitals are using these AI tools, and the companies mention HIPAA compliance on their websites, are the consumer AI health tools also regulated by HIPAA? Do consumers have the same kind of relationship with these companies that healthcare organizations do?
We will explore these questions in this article and unpack the distinction between enterprise-grade AI tools and consumer AI tools in healthcare. We will also dive into what privacy protections are available to individuals who use consumer AI tools.
Enterprise-grade vs. consumer-facing AI tools
Anthropic and OpenAI offer two separate product categories within healthcare:
Enterprise-grade tools: These are built for healthcare organizations such as hospitals, health systems, and health insurance companies. The OpenAI for healthcare suite is meant to help healthcare organizations implement AI workflows to scale tasks such as generating referral letters. ChatGPT for Healthcare, a product within that suite, is a secure AI workspace where clinicians can get answers and assistance based on “trusted medical evidence to support clinical decisions…”.
Similarly, Anthropic’s enterprise AI tools allow organizations to connect Claude to industry-standard databases and scientific literature to reduce manual lookup times and accelerate clinical and administrative workflows.
Consumer tools: OpenAI announced ChatGPT Health (not to be confused with ChatGPT for Healthcare), a dedicated space within the ChatGPT user interface meant to help everyday consumers make sense of their health information, interpret lab results, and more.
With Anthropic, while there doesn’t appear to be a consumer AI tool explicitly named ‘Claude Health’, the company’s website states that consumers in the US can use Claude Pro or Max to grant Claude secure access to their medical information so it can help them interpret test results, better understand their health, and so on, similar to how ChatGPT Health operates.
Why this distinction matters
The distinction matters because enterprise AI tools and consumer AI tools operate under different regulatory frameworks.
Healthcare organizations (covered entities under HIPAA) purchasing enterprise-grade AI tools from Anthropic and OpenAI can negotiate a Business Associate Agreement (BAA) with these companies. Under HIPAA, signing a BAA with a healthcare organization creates contractual obligations for Anthropic and OpenAI: protecting sensitive health information becomes a shared responsibility between healthcare organizations and these companies (business associates under HIPAA). These contractual obligations are enforced by the US Department of Health and Human Services Office for Civil Rights (OCR), which has the authority to investigate violations and impose penalties.
By contrast, an individual who uses a consumer AI tool such as ChatGPT Health is not a covered entity under HIPAA. HIPAA applies only to covered entities and their business associates, so when an individual shares their own health information directly with a consumer tool, no covered entity is involved and HIPAA does not apply. It is likely that Anthropic and OpenAI don’t offer BAAs to consumers for the simple reason that HIPAA doesn’t govern this relationship and a BAA would be meaningless. Instead, privacy protections for consumers come from these companies’ privacy policies and terms of service.
You may wonder what these protections are. Anthropic and OpenAI state on their websites that they don’t use health data from users to train their models. According to OpenAI, any conversations and data shared with their consumer AI tools are encrypted by default at rest and in transit. In addition, OpenAI’s website specifies that for healthcare conversations with ChatGPT Health, additional protections such as purpose-built encryption and data isolation have been implemented.
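For readers who want a concrete sense of what “encryption at rest” means, here is a minimal Python sketch that encrypts a toy lab-result record before it would be written to storage. This is purely illustrative: it says nothing about how OpenAI or Anthropic actually implement their protections, and the `cryptography` package, the key handling, and the record format are all assumptions made for the example.

```python
# Illustrative only: a minimal sketch of "encryption at rest" for a health
# record. This is NOT how OpenAI or Anthropic implement their protections.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# In a real system the key would live in a key-management service,
# never alongside the data or in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

lab_result = b'{"test": "HbA1c", "value": 5.4, "unit": "%"}'

# "At rest" means what lands on disk is ciphertext, not the raw record.
ciphertext = cipher.encrypt(lab_result)

# Only a holder of the key can recover the original record.
assert cipher.decrypt(ciphertext) == lab_result
```

The same idea applies “in transit”: data moving between the user and the service travels over an encrypted channel (typically TLS), so it is unreadable if intercepted.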
Such measures are assurances provided by the companies rather than regulatory obligations created by a signed agreement. Even so, these assurances create certain obligations for the companies under federal and state laws. Those obligations, however, are typically enforced through a combination of private litigation and consumer protection laws rather than through HIPAA.
Training data
In the previous section I noted that Anthropic and OpenAI state that they don’t use health data from users to train their models. If users’ health information is not used to train these models, where did their health-related knowledge come from?
Consumer AI models are typically trained on publicly available information, de-identified medical data from third parties, and non-health-related information from users of the AI tools themselves who opted in to having their chats and data used for training and improving the models. This is legal: federal law does not prohibit de-identified health information from being analyzed and used to train AI models, as long as the de-identification process follows certain standards. These standards are laid out in Title 45 of the Code of Federal Regulations, Section 164.514 (45 CFR § 164.514).
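To give a rough sense of what the Safe Harbor method in 45 CFR § 164.514(b)(2) involves, the sketch below strips a few of the rule’s 18 identifier categories (names, dates, contact details) from a toy record. The field names, the record structure, and the subset of identifiers shown are invented for the example; real de-identification pipelines must handle all 18 categories, or use the rule’s alternative “expert determination” method.

```python
# A toy sketch of the Safe Harbor idea in 45 CFR § 164.514(b)(2): remove
# direct identifiers before data is used for analysis or model training.
# Field names and the identifier subset here are illustrative; the rule
# enumerates 18 categories and real pipelines must address all of them.

# A handful of the 18 Safe Harbor identifier categories.
IDENTIFIER_FIELDS = {"name", "email", "phone", "ssn", "address", "birth_date"}

def deidentify(record: dict) -> dict:
    """Drop identifier fields, keeping only non-identifying attributes."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

patient = {
    "name": "Jane Doe",
    "birth_date": "1984-02-29",
    "email": "jane@example.com",
    "diagnosis": "type 2 diabetes",
    "hba1c": 6.1,
}

print(deidentify(patient))  # {'diagnosis': 'type 2 diabetes', 'hba1c': 6.1}
```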
In other words, the health knowledge of these models most likely came from de-identified health data, not from the health data of users of these consumer AI tools. This is an important nuance.
Things for users to remember
Consumer AI tools show a lot of promise, especially as healthcare becomes more expensive and access remains a challenge. Millions of people already use such tools to understand their health.
At the same time, consumer AI tools are NOT healthcare providers. Companies have been explicit about this. For example, the OpenAI website clearly states that ChatGPT Health “is designed to support, not replace, medical care.”
Before connecting health apps and medical records to these tools, users should understand what protections they do and don’t have.
You control what data you share: Users can connect and disconnect health apps and medical records as they please, and should take advantage of this.
A different relationship: An individual user engaging with consumer AI tools is not a HIPAA covered entity. As a result, privacy protections come from the companies’ privacy policies and terms of service rather than from a negotiated agreement like a BAA.
No enterprise-grade HIPAA compliance features: Unlike enterprise customers, for whom HIPAA compliance is a must, an individual user doesn’t get access to enterprise-centric compliance features (at least at the time of writing) such as data residency options, customer-managed encryption keys, and the like. Instead, they rely on the infrastructure the company provides.
Remedies for disputes: If a dispute arises, remedies fall under the consumer protection and private litigation umbrella rather than HIPAA-style regulatory enforcement.
None of this makes consumer AI tools inherently unsafe; they simply sit in a different regulatory silo than enterprise-grade AI tools. Understanding these differences will help consumers make informed decisions about what health information to share with these tools.
Image: Flickr user Rob Pongsajapan
Nirmal Vemanna is Principal Product Specialist, Healthcare and Life Sciences at Tealium, a Customer Data Platform company. In his current role, Nirmal is in charge of product strategy and development of data platforms and analytics tools for the healthcare and life sciences vertical.
Under Nirmal’s leadership, Tealium launched the industry’s first ever privacy-centric data orchestration platform that allows healthcare and life sciences organizations to collect, analyze, and orchestrate patient and physician data across the entire customer engagement ecosystem in real time.
Nirmal has 13 years of experience in the healthcare and life sciences industry. He has worked at industry leaders such as Pfizer, GlaxoSmithKline, Merck, and IQVIA building cutting edge data platforms and analytics tools to help in drug discovery, drug commercialization, and customer engagement.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
