AI in Healthcare: Strategies for Maintaining HIPAA Compliance

Updated on July 31, 2024


The topic of AI in healthcare is often an exciting one, representing improved efficiency and a better patient experience. However, for chief information security officers (CISOs) in the healthcare sector, unsecured AI poses a serious security risk that can threaten HIPAA compliance. When AI systems lack robust security measures, they become vulnerable to cyberattacks and data breaches.

The threat is real and growing by the day. In a recent survey, 71 percent of healthcare IT professionals reported that their organizations had experienced at least one data breach since 2022. What’s more, in 2023, HIPAA fines more than doubled over the previous year. These breaches often occur because AI tools are not properly vetted or secured, leading to unauthorized access to sensitive health information.

There’s no question that healthcare companies are facing serious challenges with data security and privacy. So far, AI may be proving more of a liability than an asset. Can healthcare organizations actually stay compliant while also taking advantage of AI?

The problem of AI in healthcare

AI has increased cyber risk in healthcare in three main ways. The first comes in the form of “shadow AI.” Shadow AI refers to artificial intelligence systems or tools that staff members implement and use without the explicit approval, knowledge, or oversight of the IT department. In a way, it’s understandable: employees adopt AI tools to address immediate business needs, bypassing formal IT channels because of hurdles in the official approval process.

Even cybersecurity teams aren’t immune: almost three in four cybersecurity professionals report that they have used unsanctioned technology or AI tools in the past year. In short, if your IT team isn’t aware of the tools workers are using, you have no visibility into the potential cyber risks or vulnerabilities those tools introduce.

The second factor that has increased the risk of AI comes from professionals sharing health-related information with general AI tools, like ChatGPT, that aren’t optimized for healthcare applications. Some healthcare professionals might use ChatGPT to quickly analyze data, draft reports, or answer medical questions. But ChatGPT is not HIPAA compliant, as noted in a 2023 article in the HIPAA Journal. That means healthcare professionals shouldn’t be sharing protected health information (PHI) with these types of tools.

The final risk factor is that many healthcare companies haven’t performed an organization-wide AI risk assessment. This is mainly because a lot of companies are relatively new to using AI in daily business processes, and they’ve yet to adjust their risk and cybersecurity plans accordingly. All of these factors combine to create a serious problem with HIPAA compliance for healthcare companies. 

Healthcare’s response to AI

Given the possibility of heavy fines for organizations and potentially both fines and criminal charges for individuals who violate HIPAA, it seems clear that CISOs can’t simply stand idle while AI puts their companies at risk. In my interactions with CISOs, I’ve noticed a significant increase in awareness around AI-based vulnerabilities, in addition to increased training and funding for HIPAA compliance. Still, there’s a lot more work to be done. Here are some things CISOs can do to address these concerns. 

First, organizations need to prevent shadow AI adoption by implementing software controls on company devices. Antivirus, anti-malware, and endpoint detection and response (EDR) tools will help you monitor and protect devices from unauthorized AI software installations. You can also implement strict role-based access control (RBAC) software to prevent anyone below a certain level within the company from downloading any apps, apart from pre-approved tools. I would also recommend that you develop a whitelist of approved tools and simplify your application approval process as much as possible so workers are less likely to turn to shadow AI. 
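To make the whitelist-plus-RBAC idea concrete, here’s a minimal Python sketch of the decision logic. The tool names and roles are hypothetical examples, and in practice this policy would be enforced by your EDR or mobile device management platform rather than a standalone script.

```python
# Minimal sketch of an application whitelist with a role-based override.
# Roles and tool names are hypothetical; real enforcement belongs in your
# EDR or mobile device management platform, not application code.
from dataclasses import dataclass

APPROVED_TOOLS = {"bastiongpt", "compliantgpt"}  # your pre-approved whitelist

@dataclass
class InstallRequest:
    user_role: str  # e.g., "clinician", "analyst", "it_admin"
    app_name: str

def can_install(request: InstallRequest) -> bool:
    """Allow pre-approved tools for everyone; anything else requires IT admin rights."""
    if request.app_name.lower() in APPROVED_TOOLS:
        return True
    return request.user_role == "it_admin"

print(can_install(InstallRequest("clinician", "BastionGPT")))    # True
print(can_install(InstallRequest("clinician", "RandomAITool")))  # False
```

Keeping the whitelist current and the approval path fast is what actually discourages shadow AI; the technical control itself is just a backstop.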

Second, though you may not be able to prevent every instance of shadow AI, you can educate your end users on acceptable vs. unacceptable use. Banning certain AI tools might seem like the easier answer, but a ban without adequate enforcement may just increase the usage of shadow AI. Employees often continue to use shadow AI because these tools help them perform their tasks more efficiently or effectively, and they might perceive the formal IT approval process as too slow or cumbersome to meet their immediate needs.

I would recommend that, rather than banning AI tools like ChatGPT, companies have policies and training in place to allow health workers to use these technologies to increase efficiency while still maintaining patient safety and privacy. For instance, employees can use ChatGPT in a HIPAA-compliant manner by de-identifying protected health information. 

According to HIPAA guidelines, this can be done by removing specific personal identifiers, such as names, geographic subdivisions smaller than a state, all elements of dates (except year) directly related to an individual, phone numbers, and Social Security numbers. Alternatively, you can pre-approve AI tools that specifically focus on HIPAA compliance, like BastionGPT or CompliantGPT. 
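As an illustration of that de-identification step, here’s a minimal Python sketch that redacts a few identifiers with regular expressions before text is pasted into a general-purpose AI tool. The patterns are simplified examples; a production pipeline must cover all eighteen Safe Harbor identifier categories, and regex alone will miss free-text names and addresses.

```python
import re

# Simplified example patterns for a few Safe Harbor identifiers.
# A real de-identification pipeline must cover all eighteen categories;
# regex alone will not catch free-text names, addresses, or indirect identifiers.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),  # Safe Harbor permits keeping the year
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with placeholders before sending text to an AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt seen 03/14/2024, callback 555-867-5309, SSN 123-45-6789."
print(deidentify(note))
# Pt seen [DATE], callback [PHONE], SSN [SSN].
```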

Finally, your risk assessment for HIPAA compliance and cybersecurity should account for the AI tools in use across the organization and evaluate how effective (or ineffective) your existing software controls and training are. Create risk reports that you can share with upper management so you can start taking measures to better protect your organization, like encrypting all PHI and making sure health data is only shared with authorized applications. 
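As one concrete example of the “encrypt all PHI” measure, here’s a minimal sketch using the Fernet recipe from Python’s widely used cryptography library, which provides symmetric, authenticated encryption. Key management (for example, via a cloud key management service) is assumed to be handled elsewhere.

```python
from cryptography.fernet import Fernet

# Generate the key once and store it in a secrets manager or KMS;
# hard-coding keys or committing them to source control defeats the control.
key = Fernet.generate_key()
cipher = Fernet(key)

phi_record = b"Patient: [REDACTED], Diagnosis: Type 2 diabetes"

# Fernet is authenticated encryption: a tampered token fails to decrypt
# rather than silently returning corrupted plaintext.
token = cipher.encrypt(phi_record)

# Only applications holding the key can read the record, which maps to
# "health data is only shared with authorized applications."
assert cipher.decrypt(token) == phi_record
```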

Final thoughts

AI in healthcare isn’t going anywhere, and companies will continue to face vulnerabilities and increased risk. However, with the right measures, I believe it’s possible to largely overcome the recent cybersecurity struggles in the industry that may be caused by widespread AI adoption. The potential benefits of AI in healthcare are significant, including improved diagnostic accuracy, personalized treatment plans, enhanced patient monitoring, and streamlined administrative processes. 

By implementing robust security measures and fostering a culture of compliance, healthcare organizations can harness the power of AI to deliver better patient outcomes while maintaining data security and HIPAA compliance.

Austin Newcomer
Manager, InfoSec Assurance at Thoropass

Austin is Manager, InfoSec Assurance at Thoropass with a focus on healthcare compliance. Austin attended James Madison University and graduated with a B.B.A. in Computer Information Systems. He’s passionate about cybersecurity and cloud security. Austin has led SSAE 18 SOC 1 and SOC 2 assessments of complex environments, which include assessing security procedures, reviewing security configurations, interviewing control owners, and documenting results. He’s led, planned, and executed combined assessments across compliance frameworks (e.g., PCI, HIPAA, HITRUST, FedRAMP, and ISO).