Artificial intelligence has dominated, and will continue to dominate, technology narratives throughout 2024, as organizations and industries look to capitalize on the opportunities AI presents and respond to the threats it creates. The effects will be keenly felt across cybersecurity teams, as threat actors use new technologies to drive innovative attack methods and potentially increase the speed, variety, and severity of cyberattacks.
Healthcare is one industry where the severity of cyberattacks is already a concern. The recent Change Healthcare ransomware attack, the biggest attack ever experienced by the healthcare industry, serves to highlight just how vulnerable the industry is to cyber incidents – and how critical it is to keep it safe.
The temptation for healthcare providers to embed AI into their offerings and patient services is strong. But there are important issues to consider before they do.
Consider your existing technology supply chain and developer ecosystem
As with most organizations, the risks involved in adopting AI within healthcare are twofold. There are the developers and IT administrators inside an organization who build and manage software intended to make care delivery more efficient. Then there are the technologies the organization adopts to improve and secure its data – its software supply chain.
The latter presents the greatest risk at this time, as partners across the healthcare software industry invest in large language models (LLMs). The EMR data supply chain is very real, with specialized services – such as speech-to-text transcription, image processing, or lab analysis – outsourced to different organizations. As providers in this chain add AI technologies to make their processes more efficient, healthcare providers may not know the full extent of the risks that supply chain carries.
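One purely illustrative way to start mapping that risk is a simple inventory of downstream partners and the patient data that flows to them. The vendor names, fields, and flagging rule in the sketch below are assumptions made for this example, not a description of any real supply chain.

```python
# Hypothetical sketch: inventory third-party services in the EMR data supply chain
# and flag those that process patient data with LLMs without a signed BAA.
# Vendor names, fields, and the flagging rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str          # third-party service provider
    service: str       # e.g. speech-to-text, image processing, lab analysis
    handles_phi: bool  # does patient data flow to this vendor?
    uses_llm: bool     # has the vendor embedded an LLM in its pipeline?
    baa_signed: bool   # is a HIPAA business associate agreement in place?

def flag_risky_vendors(vendors: list[Vendor]) -> list[Vendor]:
    """Return vendors that send PHI through an LLM without a signed BAA."""
    return [v for v in vendors if v.handles_phi and v.uses_llm and not v.baa_signed]

if __name__ == "__main__":
    supply_chain = [
        Vendor("TranscribeCo", "speech-to-text", True, True, False),
        Vendor("LabLink", "lab analysis", True, False, True),
    ]
    for v in flag_risky_vendors(supply_chain):
        print(f"Review required: {v.name} ({v.service})")
```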
Protecting patients and their data is paramount for a healthcare system. Improving your organization's overall cybersecurity posture should be the top priority before making any additional investments in AI.
Ensure you’ve considered privacy best practices and bias
When developing any application that healthcare organizations can adopt, it is critical to ensure that PHI and PII are handled in accordance with HIPAA, and that the data on which the AI is trained is handled securely and is unbiased. Large language models are continually retrained on updated and improved language data; patient-provider interactions must never be used to train such models, so that those private interactions cannot ‘leak’ into future versions of the LLM.
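To make that requirement concrete, here is a minimal, purely illustrative sketch of the kind of safeguard it implies: scrubbing obvious identifiers before any text leaves the organization for an external model. The patterns, placeholder tokens, and sample note are assumptions for the example; real HIPAA de-identification relies on vetted tooling and covers far more than simple regexes can catch.

```python
# Minimal sketch (hypothetical): strip obvious identifiers from clinical notes
# before they are sent to any external LLM service. Real de-identification under
# HIPAA requires far more than regexes; names and dates, for instance, still need
# NER-based tooling. This only illustrates the principle that PHI must never
# reach an external model or its training data.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",                        # social security numbers
    r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",    # US phone numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",                # email addresses
    r"\bMRN[:\s]*\d+\b": "[MRN]",                             # medical record numbers
}

def redact_phi(text: str) -> str:
    """Replace obvious PHI patterns with placeholder tokens."""
    for pattern, token in REDACTIONS.items():
        text = re.sub(pattern, token, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    note = "Patient Jane, MRN: 445521, reachable at (555) 123-4567 or jane@example.com."
    print(redact_phi(note))  # -> "Patient Jane, [MRN], reachable at [PHONE] or [EMAIL]."
```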
As has been shown in other fields, such as law enforcement and prison sentencing guidelines, deploying AI technologies in situations that affect people’s lives must be done with great care, compassion, and consideration. Underrepresented groups could see worsening health outcomes if healthcare practices blindly follow recommendations from models trained on biased data. Healthcare is a nuanced and personalized field; AI deployments must take this into account, and those integrating the AI must actively work to counteract biases.
Know your enemy – and know your budget
Threat actors are already leveraging AI for malicious purposes. We have seen them use frameworks like CrewAI to automate spear phishing campaigns, making it even easier to infiltrate organizations and then exfiltrate login credentials and private data. There will be a constant push and pull between advancements in technology and the budget healthcare organizations have to adopt them. With healthcare a primary target for attackers, it is vital that healthcare organizations – meaning their administrations and boards – finally start treating their infrastructure as their number one vulnerability and invest in proactive cybersecurity.
Unfortunately, we will likely see the impact of weak cybersecurity programs for years to come. Healthcare organizations will continue to be targeted for their confidential patient data, and entire healthcare systems will shut down due to ransomware attacks.
Threat actors understand that healthcare is a critical part of local, national, and global infrastructure. They know systems need to stay online to ensure continuity of care, which makes healthcare look like an easy and vulnerable target. AI is not a silver bullet that will keep healthcare safe, and those approaching it as one should exercise extreme caution.
Investing in core cybersecurity practices and upkeep – securing and updating critical systems, moving users to multi-factor authentication with strong passwords, and monitoring for unauthorized access – will create a solid foundation on top of which AI can be added to improve patient care.
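As one small illustration of the monitoring piece, the hypothetical sketch below flags accounts with bursts of failed logins in authentication events. The event format, field names, and threshold are assumptions made for the example; in practice this kind of detection lives in a SIEM with alerting and MFA enforcement, not a standalone script.

```python
# Hypothetical sketch: flag accounts with an unusual number of failed logins.
# Field names, log format, and the threshold are illustrative assumptions only.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # alert when an account exceeds this many failures

def flag_suspicious_accounts(events: list[dict]) -> list[str]:
    """Return usernames whose failed-login count exceeds the threshold."""
    failures = Counter(
        e["user"] for e in events if e.get("result") == "failure"
    )
    return [user for user, count in failures.items() if count > FAILED_LOGIN_THRESHOLD]

if __name__ == "__main__":
    sample = [{"user": "dr.smith", "result": "failure"}] * 7 + [
        {"user": "nurse.lee", "result": "success"}
    ]
    print(flag_suspicious_accounts(sample))  # -> ['dr.smith']
```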
Sean McNee
Sean McNee is VP of research and data at DomainTools.