AI Is Reshaping Healthcare—But It’s Also Rewriting the Cyber Risk Equation

Updated on January 4, 2026

Artificial intelligence is changing the way hospitals work, sometimes in ways that feel almost effortless. Tools that listen quietly to clinical conversations and draft patient notes, algorithms that flag strokes before a radiologist opens the scan, systems that summarize charts or assist nurses during intake—all of these are becoming common in modern clinical environments. And for healthcare professionals facing relentless documentation pressure, staffing shortages, and rising patient acuity, the technology can feel more like oxygen than innovation.

But alongside these gains is a quieter story that healthcare can’t ignore: AI systems are introducing new cybersecurity risks, some obvious, some subtle, and some entirely unprecedented. Hospitals have spent years hardening their networks, segmenting clinical systems, tightening access controls, and refining their business continuity and incident response plans. Now they are discovering that AI tools (especially those that run in the cloud) rely on large language models (LLMs), or sit between clinicians and the medical record, introducing a different kind of exposure. The technology is advancing faster than the guardrails meant to protect it.

New forms of cyber exposure in clinical settings 

Consider the AI documentation tools now sweeping through exam rooms. One example listens to patients and clinicians speak, transmits the conversation to cloud systems, and uses large language models to generate structured notes. From a patient-care perspective, the appeal is obvious. But from a cybersecurity standpoint, these tools combine several of the most challenging aspects of healthcare: continuous audio capture, real-time transmission of highly sensitive PHI, reliance on third-party cloud environments, and the possibility that even a minor misconfiguration could expose private conversations. A vulnerability in one of these systems wouldn’t merely leak names and dates of birth, it could expose the most intimate parts of the patient-clinician relationship.

Radiology AI platforms raise different, but equally serious, risks. Some radiology AI platforms run in the background, scanning imaging studies for signs of stroke or hemorrhage and pushing time-critical alerts to specialists. These systems depend on continuous data flows between PACS, EHRs, vendor clouds, and mobile devices. If attackers disrupt those data flows, compromise a vendor’s cloud infrastructure, or interfere with alert delivery, the result is not just a security event, it is a clinical one. When a stroke alert arrives late, or not at all, the consequences are measured in lost treatment windows, not lost records.

Even AI designed to help hospitals manage daily operations introduces new risks. Large language models integrated into patient-facing portals, triage chatbots, or scheduling systems can become attack surfaces for prompt-injection or data-exfiltration techniques that did not exist ten years ago. A cleverly crafted patient message or chatbot input can manipulate an LLM into revealing data it shouldn’t process, altering workflows, or exposing backend logic. These vulnerabilities do not fit neatly into the classic cybersecurity categories—phishing, ransomware, credential theft—and many security teams are still figuring out how to test for them.

Unique supply chain challenges with AI

Part of the challenge is that AI systems are built differently from traditional software. They learn from data. They adjust behavior through model updates. They often rely on subcontractors, hosting partners, and upstream model providers that operate outside the hospital’s direct visibility. A hospital might have a contract with Vendor A, but the AI model might run on infrastructure belonging to Vendors B and C, using training data processed by Vendor D. Each of those parties represents another link in the supply chain, and any one of them can introduce risk.

Evolving Regulatory Landscape

As the scope of risk expands, regulators are beginning to take notice. The Office for Civil Rights at HHS has already warned hospitals that sending PHI to third-party tracking tools without proper controls violates HIPAA. The same logic applies to AI workflows that transmit audio, imaging, or unstructured text to external environments. ONC’s HTI-1 rule takes the next step, requiring algorithmic transparency from developers of certified health IT, which is a polite way of saying: “Hospitals deserve to know what the model does, how it works, and where the data goes.

And because many diagnostic AI tools qualify as software as a medical device (SaMD), the FDA continues to evaluate them for safety, reliability, and post-market performance. What used to be a simple IT procurement conversation is now part of a broader health-tech regulatory ecosystem that expects evidence, transparency, and security maturity.

A new cybersecurity playbook for a new kind of system

Cybersecurity teams are adapting, but the pace is uneven. Many have strong controls around EHRs, network segmentation, identity management, and ransomware defense, but far fewer have experience threat-modeling large language models or validating whether an algorithmic update introduces new vulnerabilities. The traditional security playbook assumes static behavior—systems that behave the same way today as they did yesterday. AI breaks that assumption. A single model update can alter the system’s outputs, dependencies, or data needs, all without an obvious change to the user interface.

Even the familiar concepts in HIPAA are harder to apply. The “minimum necessary” standard becomes difficult when an AI tool performs best with broad, unfiltered data access. Risk assessments must now account for conversational audio, complex imaging flows, and generative inferences—not just structured data fields. Vendor management should extend beyond contracts and questionnaires to ongoing monitoring of model updates, subprocessors, and cloud security posture.

Additionally, there is the problem of hallucination and bias. A model that fabricates a clinical detail or misinterprets a symptom isn’t just inaccurate—it creates a cybersecurity question. If a model can be manipulated through crafted inputs, or if malicious actors can cause it to generate unsafe or misleading outputs, that becomes an avenue for exploitation. Bias introduces a related risk: models that behave differently for different patient groups can create not only ethical concerns, but also regulatory and liability concerns when disparities are tied to algorithmic behavior.

Navigating AI’s cybersecurity challenges

Hospitals don’t need to fear AI. They simply need to recognize that AI changes the cybersecurity landscape. It collapses old boundaries between IT, compliance, clinical governance, and vendor risk. It introduces new failure modes, some accidental, some adversarial. It embeds sensitive data in areas of healthcare that have not historically been monitored with forensic rigor. And it demands a different level of visibility into systems that learn, update, and adapt in ways that traditional software never did.

One key approach for hospitals to navigate this landscape well is to treat AI as a living system rather than a static tool. It’s important to ask where the data travels, who touches it, how the model changes over time, and what protections exist when something goes wrong. They should loop cybersecurity professionals into the earliest stages of procurement and implementation. Another best practice is to test model outputs in the same manner as testing disaster recovery systems. Hospitals should approach AI with both enthusiasm and healthy skepticism.

AI can help make healthcare safer, faster, and more humane. But that future depends on a clear-eyed understanding of the cybersecurity risks it introduces. Those risks don’t mean “slow down,” they mean “look closely.” The opportunities are real, but so are the vulnerabilities. Hospitals that address both with equal seriousness will be the ones that likely benefit most from what AI can offer.

Jeffrey Bernstein
Jeffrey Bernstein
Director, Risk Advisory Services at Kaufman Rossin

Jeffrey Bernstein is a director in Kaufman Rossin's Risk Advisory Services practice. His focus includes security incident and event planning and response, cybersecurity strategy, governance, compliance, training, and intelligence.

Jeff has extensive experience providing cybersecurity response, governance, risk and compliance services for private clients and family offices, as well as businesses in regulated industries, including healthcare organizations, financial institutions, investment firms, and law firms. He has over 20 years of experience advising clients on the security and compliance of their networks, applications, systems, people, property and information.

Jeff is a recognized thought leader who has contributed to industry guidance, authored articles, written white papers, and been quoted by TV and print media. He is a contributor to the National Institute of Standards and Technology (NIST) SP 800-46 Rev. 3: Guide to Enterprise Telework Security and NIST SP1800-1: Security Electronic Health Records on Mobile Devices. His articles have been featured in Scotsman Guide, USA Today/Gannett publications, The Straits Times, and other outlets. He regularly presents at conferences and has lectured master’s studies classes at New York University.