For years, administrative burdens, tedious tasks and compliance complexities have slowed our industry down and compromised the human elements of healthcare. Now, artificial intelligence (AI) is poised to enable better, faster, more personal care delivery. But, as with any new technology, healthcare organizations should weigh the potential risks as rigorously as they assess the benefits.
Case in point: The same AI that can identify health patterns and predict claim denials also expands the attack surface available to cybercriminals. To protect patients, their data and the systems they depend on, organizations must evolve their security and privacy frameworks to keep pace with the shifting threat landscape.
Examining evolving risks
So, what does this new cybersecurity reality look like? Phishing and social engineering campaigns, for instance, are now hyper-personalized. Bad actors are leveraging AI to combine stolen protected health information (PHI) with personal and insurance data, producing convincing, urgent messages that bypass traditional detection methods. These AI-crafted messages often lack the typical red flags, such as grammatical errors, making them harder to spot and more effective.
Ransomware operations have also become more efficient. AI is now used to map hospital networks, identify high-value data assets and execute attacks with unprecedented speed, turning a potential vulnerability into a system-wide breach in minutes.
Agentic AI in particular highlights how well-intentioned technology adoption can quickly become detrimental without proper guardrails in place. These autonomous systems can make decisions and execute complex tasks across multiple platforms on behalf of users. That autonomy introduces a new threat vector: a single compromised AI model can ripple outward, creating vulnerabilities across dozens of interconnected systems.
Adapting security strategies
Cybersecurity has always been a cat-and-mouse game. Security professionals shore up their systems’ defenses, malicious actors probe for new ways around them, and the cycle continues. That’s why we must operate under the assumption that for every AI tool we leverage to improve healthcare, adversaries will weaponize similar technology to attack it.
Because AI gives cyberattacks greater speed, scale and sophistication, prevention alone is not a sufficient strategy. Organizations must implement active monitoring for rapid detection and containment, as well as immutable backups that allow them to restore critical operations without giving in to ransom demands.
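To make "immutable" concrete, the short sketch below shows one way to write backups that even an administrator, or an attacker holding stolen admin credentials, cannot delete before a retention date: Amazon S3's Object Lock via boto3. The bucket name, key and 90-day window are hypothetical; this illustrates the pattern, not a prescribed product choice.

```python
# Illustrative sketch only: write-once backups using S3 Object Lock.
# The bucket name, key and retention window are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

# Create the bucket with Object Lock enabled so retention can be enforced
# by the storage layer itself.
s3.create_bucket(
    Bucket="hospital-backups-example",  # hypothetical name
    ObjectLockEnabledForBucket=True,
)

# COMPLIANCE mode prevents anyone, including administrators (and attackers
# holding stolen admin credentials), from deleting or overwriting the object
# until the retention date passes.
with open("nightly-ehr-backup.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="hospital-backups-example",
        Key="backups/nightly-ehr-backup.tar.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```

Comparable write-once capabilities exist in most enterprise backup platforms; the property that matters is that retention is enforced by the storage layer rather than by credentials an attacker can steal.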
Additionally, hospitals’ traditional security architectures were not designed for autonomous systems operating with human-level privileges. To secure this new environment, a “zero trust” framework should be the default standard. This means moving beyond perimeter-based security and compliance checklists: every request for access, regardless of its origin, must be verified. Similarly, network segmentation can help prevent a single breach from compromising an entire system.
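As a minimal sketch of the zero-trust idea, consider the hypothetical authorization check below: identity, device posture and an explicit per-role allowlist are re-evaluated on every request, and the caller's network location grants nothing. The roles, resources and posture checks are illustrative assumptions, not any particular product's model.

```python
# Minimal zero-trust sketch: every request is evaluated on its own merits,
# with no notion of a trusted internal network. All names are illustrative.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool
    device_compliant: bool  # e.g., patched OS and attested disk encryption
    network_segment: str    # recorded for audit; never grants trust by itself
    resource: str
    action: str


# Explicit allowlist per role; anything not listed is denied by default.
ROLE_PERMISSIONS = {
    "nurse": {("ehr/patient-chart", "read")},
    "billing": {("claims/queue", "read"), ("claims/queue", "write")},
}


def authorize(request: AccessRequest, role: str) -> bool:
    """Re-verify identity and device posture on every call; note that
    request.network_segment is deliberately never consulted."""
    if not (request.mfa_verified and request.device_compliant):
        return False
    return (request.resource, request.action) in ROLE_PERMISSIONS.get(role, set())
```

The design choice that matters is the default: anything not explicitly permitted is denied, so a compromised workstation or network segment does not automatically become a compromised system.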
For agentic AI in particular, organizations should grant only the minimum necessary privileges and access. Monitoring, segregation of duties and authentication (i.e., tethering agents to a human user) can further limit the damage a hijacked agent could do.
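One hedged illustration of that tethering: the sketch below issues an agent a short-lived token bound to the delegating human and scoped to an explicit allowlist of actions, so a hijacked agent holds limited power for a limited time. The names, scopes and token format are hypothetical.

```python
# Hedged sketch: a short-lived, narrowly scoped credential for an AI agent,
# tethered to the human who delegated the task. Names, scopes and the token
# format are hypothetical, not any specific product's API.
import secrets
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentToken:
    token: str
    delegating_user: str             # the human accountable for the agent
    allowed_actions: frozenset[str]  # everything not listed here is denied
    expires_at: float

    def permits(self, action: str) -> bool:
        return time.time() < self.expires_at and action in self.allowed_actions


def issue_agent_token(user: str, actions: set[str], ttl_seconds: int = 900) -> AgentToken:
    # A deliberately short lifetime: a hijacked agent loses access in minutes.
    return AgentToken(
        token=secrets.token_urlsafe(32),
        delegating_user=user,
        allowed_actions=frozenset(actions),
        expires_at=time.time() + ttl_seconds,
    )


# Example: a scheduling agent may read calendars and draft messages,
# but any attempt to touch clinical records is denied outright.
tok = issue_agent_token("dr.rivera", {"calendar:read", "messages:draft"})
assert tok.permits("calendar:read")
assert not tok.permits("ehr:write")
```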
These are the kinds of security controls that healthcare leaders need to prioritize when integrating AI into their systems. The allure of “free” tools should be met with extreme caution as they often come at the hidden cost of surrendering control over your data. Every AI vendor and platform must undergo rigorous vetting, with explicit contractual clarity on data ownership, model governance and override capabilities.
Protecting what matters most
For healthcare organizations, AI’s strengths can quickly turn into points of weakness. But with the right balance of innovation and protection, hospitals—and the systems they rely on—can become resilient. Because in healthcare, safety is everything.

Tim O’Brien
Tim O’Brien is Vice President of Cloud Growth for Altera Digital Health.