
Why Generative AI Threatens Hospital Cybersecurity — and How Digital Identity Can Be One of Its Greatest Defenses

While more generative AI tools are becoming available in healthcare for diagnostics and patient communication, it is important for clinicians and healthcare staff to be aware of the security, privacy, and compliance risks when entering protected health information (PHI) into a tool like ChatGPT.

Healthcare organizations are among the biggest targets of cyberattacks. A survey we conducted found that more than half of healthcare IT leaders reported that their organization faced a cybersecurity incident in 2021. Hospitals face legal, ethical, financial, and reputational ramifications during a cyber incident. Cyberattacks can also lead to increased patient mortality rates, delayed procedures and tests, and longer patient stays, posing a direct threat to patient safety.

The rise of AI and tools like ChatGPT has only made these risks greater. For one, AI assistance will likely increase the frequency of cyberattacks by lowering the barrier to entry for malicious actors. Phishing attacks may also become more frequent and deceptively realistic with the use of generative AI. But perhaps the most concerning way generative AI could negatively impact healthcare organizations is through the improper use of these tools when providing patient care.

ChatGPT can lead to HIPAA violations and PHI breaches

Without proper education and training on generative AI, a clinician using ChatGPT to complete documentation can unknowingly upload private patient information onto the internet, even when performing the most innocuous of tasks. If they use the tool simply to summarize a patient’s condition or consolidate notes, the information they share with ChatGPT is saved into its database the moment it’s entered. This means that not only can internal reviewers or developers potentially see that information, but it may also end up explicitly incorporated into a response ChatGPT provides to a query down the line. And if that information includes seemingly harmless details like nicknames, dates of birth, or admission or discharge dates, sharing it is a violation of HIPAA.
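
To make the risk concrete, below is a minimal sketch, in Python, of the kind of redaction layer a hospital might place between clinicians and an external LLM. The patterns and the scrub_phi helper are illustrative assumptions, not a complete de-identification pipeline:

    import re

    # Minimal, illustrative PHI scrubber: redacts a few common HIPAA
    # identifiers (dates, phone numbers, medical record numbers, SSNs)
    # from free text before it leaves the hospital network. A real
    # de-identification pipeline would need far broader coverage.
    PHI_PATTERNS = {
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub_phi(text: str) -> str:
        """Replace matched identifiers with typed placeholders."""
        for label, pattern in PHI_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Pt 'Bobby' (MRN: 00482913) admitted 03/14/2023, cb 512-555-0137."
    print(scrub_phi(note))
    # -> Pt 'Bobby' ([MRN]) admitted [DATE], cb [PHONE].

Note that the nickname “Bobby” slips straight through: pattern matching alone cannot catch every identifier, which is exactly why clinician education must accompany any technical safeguard.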

ChatGPT and other large generative AI tools can certainly be useful, but irresponsible use risks incredible damage to hospitals and patients alike.

Generative AI is building more convincing phishing and ransomware attacks

While it’s not foolproof, ChatGPT churns out well-rounded responses with remarkable speed and hardly ever makes typos. In the hands of cybercriminals, this means fewer of the spelling errors, grammar issues, and suspicious wording that usually give phishing attempts away, and more traps that are harder to detect because they look and read like official correspondence.

Writing convincing deceptive messages isn’t the only task cybercriminals use ChatGPT for. The tool can also be prompted to build mutating malicious code and ransomware by individuals who know how to circumvent its content filters. Such code is difficult to detect, and the technique is surprisingly easy to pull off. Ransomware is particularly dangerous to healthcare organizations because these attacks typically force IT staff to shut down entire computer systems to stop the spread. When that happens, doctors and other healthcare professionals must go without crucial tools and fall back on paper records, resulting in delayed or insufficient care that can be life-threatening. Since the start of 2023, 15 healthcare systems operating 29 hospitals have been targeted by ransomware incidents, with data stolen from 12 of the 15 organizations affected.

This is a serious threat that requires serious cybersecurity solutions. And generative AI isn’t going anywhere — it’s only picking up speed. It is imperative that hospitals lay thorough groundwork to prevent these tools from giving bad actors a leg up.

Maximizing digital identity to combat threats of generative AI

As generative AI and ChatGPT remain a hot topic in cybersecurity, it may be easy to overlook the power that traditional AI, machine learning (ML) technologies, and digital identity solutions can bring to healthcare organizations. Digital identity tools like single sign-on, identity governance, and access intelligence can save a hospital’s clinicians a combined average of 168 hours per week, time otherwise spent on inefficient, time-consuming manual procedures that tax limited security budgets and hospital IT staff. By modernizing and automating these procedures with traditional AI and ML solutions, hospitals can strengthen their defenses against the growing rate of cyberattacks, which have doubled since 2016.
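
As a simplified sketch of the kind of automation identity governance makes possible, the snippet below flags application entitlements that have gone unused past a review window so they can be queued for de-provisioning. The data model and the 90-day threshold are assumptions for illustration, not any vendor’s actual logic:

    from datetime import datetime, timedelta

    # Illustrative identity-governance automation: flag accounts that
    # still hold access to a clinical application but have not used it
    # within the review window, replacing manual quarterly audits.
    REVIEW_WINDOW = timedelta(days=90)  # assumed policy threshold

    entitlements = [  # hypothetical export from an access-management system
        {"user": "jsmith", "app": "EHR",  "last_used": datetime(2023, 1, 5)},
        {"user": "mlee",   "app": "EHR",  "last_used": datetime(2023, 6, 20)},
        {"user": "tbrown", "app": "PACS", "last_used": datetime(2022, 11, 2)},
    ]

    def stale_entitlements(entitlements, now):
        """Return entitlements unused for longer than the review window."""
        return [e for e in entitlements if now - e["last_used"] > REVIEW_WINDOW]

    for e in stale_entitlements(entitlements, now=datetime(2023, 7, 1)):
        print(f"Review/revoke: {e['user']} -> {e['app']} "
              f"(last used {e['last_used']:%Y-%m-%d})")

Automating even this one review step removes a slow, error-prone manual process and shrinks the window in which an orphaned account can be abused.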

Traditional AI and ML solutions come together with digital identity technology to help healthcare organizations monitor, identify, and remediate privacy violations and cybersecurity incidents. By pairing identity and access management technologies like single sign-on with the capabilities of AI and ML, organizations gain better visibility into all access and activity in their environment. What’s more, AI and ML solutions can flag and alert on suspicious or anomalous behavior based on user activity and access trends, helping hospitals remediate potential privacy violations or cybersecurity incidents sooner.

One especially useful tool is the audit trail, which maintains a systematic, detailed record of all data access across a hospital’s applications. AI-enabled audit trails can offer a tremendous amount of proactive and reactive protection against even the most skilled cybercriminals: suspicious activity, once detected, can be addressed immediately, preventing the exploitation of sensitive data and the deterioration of cybersecurity infrastructure. Where traditional systems and manual processes struggle to analyze large amounts of data, learn from past patterns, and engage in “decision making,” AI excels.
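
As a hedged illustration of what an ML layer over an audit trail could look like, the sketch below scores synthetic per-user, per-day access counts with scikit-learn’s IsolationForest and flags outliers for review. The features and contamination rate are assumptions, not any product’s actual detection logic:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Illustrative anomaly detection over an audit trail. Each row is one
    # user-day of activity: [records accessed, distinct patients viewed,
    # after-hours accesses]. Real systems would engineer far richer features.
    rng = np.random.default_rng(0)
    normal = rng.poisson(lam=[40, 25, 2], size=(500, 3))  # typical clinician activity
    suspect = np.array([[400, 380, 45]])                  # e.g., bulk record snooping
    log = np.vstack([normal, suspect])

    model = IsolationForest(contamination=0.01, random_state=0).fit(log)
    flags = model.predict(log)  # -1 marks an anomalous user-day

    for row in log[flags == -1]:
        print("Escalate for review, activity counts:", row)

An unsupervised model like this needs no labeled breach data; it simply learns what routine access looks like and surfaces the user-days that deviate, which a privacy team can then investigate.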

Ultimately, healthcare organizations face many competing cybersecurity objectives and threats. Utilizing digital identity tools to reduce risk and increase efficiency is crucial, as is developing proactive educational initiatives to ensure clinicians understand the risks and benefits of using generative AI so they don’t accidentally compromise sensitive information. While generative AI tools like ChatGPT hold a lot of potential to transform clinical experiences, these tools also signify that the risk landscape has expanded. We have yet to see all of the ways generative AI will impact the healthcare industry, which is why it’s vital that healthcare organizations keep networks and data safeguarded with secure and efficient digital identity tools that also streamline clinician work and improve patient care.

It’s safe to say we haven’t met every threat AI will pose to the healthcare industry — but with vigilance and the proper technology, hospitals can elevate their cybersecurity strategy against the ever-evolving risk landscape.

Photo: roshi11, Getty Images

Joel Burleson-Davis is the SVP of Worldwide Engineering, Cyber at Imprivata where he’s responsible for building, delivering, and evolving the suite of Imprivata’s cybersecurity products that include Privileged Access Management, Privacy Monitoring, and Identity Governance solutions. Prior to joining Imprivata, Joel was Chief Technical Officer at SecureLink, the leader in critical access management for organizations in need of advanced solutions to secure access to their most valuable assets, including networks, systems, and data. While at SecureLink, Joel was responsible for the overall technology and operational strategy and execution including direction and oversight for Product Development, Quality Assurance, IT and Cybersecurity Operations, Compliance, and Customer Success.

Before SecureLink, Joel held Systems Engineering, IT Consulting, and Instructor positions while serving as one of the founding members of The Linux Foundation certification committee, a global committee of key Linux subject matter experts.
