Protecting personal health information is one of the most critical functions of healthcare technology. Not only is it a legal requirement, but it’s also foundational to establishing and maintaining trust between patient and provider. Patients need to be confident that the information in a patient portal or electronic health record isn’t vulnerable to data breaches.
To that end, many digital healthcare tools advertise themselves as HIPAA-compliant. That label gives patients and providers a sense of assurance that the platform provides appropriate data security. But the truth is that HIPAA compliance represents only the minimum level of privacy protection for healthcare data.
To truly protect patient data, healthcare systems must go beyond HIPAA and implement safeguards against all types of vulnerabilities.
The Myth of Compliance: Why the Healthcare Industry Needs To Evolve
Since its passage in 1996, HIPAA has been refined to address emerging needs. In 2003, 2009, and 2013, the rules were updated to add protections for electronic health records. However, no new privacy protections have been finalized since 2013, despite rapid advances in information technology and artificial intelligence and the growing use of telehealth and digital portals.
As a result, being HIPAA compliant means meeting digital privacy standards that are more than a decade behind modern technology. Platforms that check the box for HIPAA compliance don’t necessarily have the capabilities needed to address data vulnerabilities in the current technological environment.
Any homeowner knows that locking the door is only one step in home security; you have to close the windows as well. Likewise, to be truly secure, digital healthcare platforms need to go beyond HIPAA and offer complete privacy protections.
Unseen AI Security Risks and Regulatory Blind Spots
Because HIPAA hasn’t been updated in more than a decade, the guidelines fall short in defining how digital health platforms should address modern security threats. While AI now permeates all kinds of digital tools, there is nothing in the HIPAA text that tells providers or developers how to address AI-specific issues like algorithmic transparency, real-time data handling, or ethical decision-making.
Even simple, widely accepted features like chatbots can be a security risk. Weaknesses in these tools can leave the personal and medical data patients enter open to compromise.
In 2024, HHS initiated a process to consider updating HIPAA guidelines, but final rules could be more than a year away. In the meantime, AI is becoming increasingly embedded in these systems, and with it come more potential weaknesses in patient data protection.
It’s incumbent upon leaders in the health and medical technology fields to address privacy challenges ahead of the regulatory process.
Security Is Patient Care
When people share their most vulnerable thoughts and emotions with an AI tool, they need confidence that their data is protected and handled ethically. That’s why security must be embedded from the very beginning of a tool’s development.
To truly earn patient trust and improve adoption, AI-powered healthcare tools should incorporate the following foundational safeguards:
- Minimal collection of personally identifiable information (PII): Only the essential data points necessary to link a patient to their electronic medical record (EMR) should be collected. Reducing the data footprint lowers exposure risk.
- Patient autonomy and control: Users should always have the ability to decide what information they share and when. There should be no coercive prompts or hidden data capture mechanisms.
- Clear transparency and education: Patients need to understand how their data is used. This includes disclosures about data handling practices and guidance on how to safely interact with the AI.
- Built-in crisis safeguards: Smart systems should be able to detect signs of mental health deterioration and flag them for clinician review or guide users to emergency services when appropriate.
- Evidence-based training: To avoid misinformation or harmful “hallucinations,” AI tools must be trained on a diverse, reputable body of medical knowledge and benchmarked against trusted clinical sources.
Perhaps most importantly, these tools should be purpose-built for healthcare, with clinical oversight throughout the design and development process. Systems developed purely from a tech-first perspective often overlook critical nuances of patient care, privacy, and ethical decision-making.
Designing for safety from the ground up ensures that patients and clinicians alike can engage with confidence.
Rethinking What Security Really Means
In a space as intimate as mental health, trust is everything, and building that trust takes thoughtful, patient-centric security. Data security should be felt, not just seen on a compliance checklist. That means AI-powered applications should go beyond HIPAA compliance to create an environment where patients feel safe enough to open up.
The most effective digital mental health tools embed security seamlessly into every interaction without adding friction to the patient experience. When security is intuitive, respectful, and transparent, patients are more likely to engage meaningfully and consistently.
It’s time for the healthcare industry to redefine digital health security through the lens of patient experience and clinical relevance. Protecting data is protecting people, and trust is the outcome of systems designed with empathy, ethics, and patient well-being at the core.

Ali Allage, CISO, CTO
Ali Allage is a serial entrepreneur serving as the Chief Technology Officer and Chief Information Security Officer at HoloMD. In those roles, he leads technology development and security strategy, ensuring that the product development strategy is executed successfully during agile sprints while protecting sensitive medical data and maintaining compliance in the face of growing cyber threats. He focuses on mitigating risks and safeguarding healthcare technology infrastructure.