The Ethics of AI and Data in Healthcare

Updated on November 6, 2022
Michael Armstrong

Artificial intelligence (AI) is playing an increasingly prominent role in all areas of healthcare, from patient communication and claims processing to records management and disease detection, and much more.

While AI’s implementation will continue to increase healthcare’s effectiveness and efficiency, it can come at a cost to patient privacy. In 2021, healthcare security breaches affected 45 million people, an all-time high.

To protect patient information and communications, healthcare organizations must build a foundation of trust, compliance and transparency — especially where AI is concerned.

AI’s connection to data

What is central to the delivery of care? Data. AI and machine learning (ML) algorithms have driven tremendous advances in predictive and big data analytics.

And because AI-powered technology increasingly facilitates access to patients’ medical data, including information exchanges between patients, physicians and others on the medical team, protecting individuals’ information and privacy is more important than ever.

There’s been a renewed focus, globally and in the United States, on identifying data privacy and security risks and establishing safeguards for the vast amounts of data generated by healthcare organizations and other companies using or selling AI-based healthcare products.

State privacy laws and regulations, as well as the Health Insurance Portability and Accountability Act (HIPAA), may also factor into healthcare data governance. But unlike other technology, AI isn’t merely a tool. In some cases, it augments or outright replaces human judgment. 

AI’s benefit to healthcare is helping generate positive patient outcomes. Yet its biggest drawback is the lack of process around how it is built, applied and deployed. So where does the division of responsibility between technology and people fall?

Challenges protecting health information

As healthcare AI and ML advance rapidly, the conversation has expanded to include how best to manage their development, including addressing privacy issues related to data security.

AI and ethics

From an ethical standpoint, there is real potential for cognitive or algorithmic bias. AI’s function and accuracy depend on the validity of its datasets, yet AI requires such massive amounts of data that eliminating bias completely is difficult, if not impossible.

Biases stem from two main sources:

  • Cognitive biases: unconscious thinking errors arising from individuals’ decisions and judgment. They infiltrate ML algorithms when designers unknowingly build them into a model or when training sets already contain them.
  • Incomplete data that isn’t representative of the entire population and therefore embeds bias.

Biases can contribute to differential treatment and negative patient outcomes. They are incredibly challenging to eradicate from AI systems, and organizations struggle to identify, measure and manage them.
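To make the measurement problem concrete, here is a minimal Python sketch of one way a team might audit for bias: comparing a model’s true positive rate across demographic subgroups. The data and subgroup names are invented for illustration; this is a sketch of the auditing idea, not a production fairness tool.

```python
# Minimal sketch: quantifying one measurable form of bias by comparing
# a model's true positive rate (sensitivity) across demographic subgroups.
# The records below are invented; a real audit would use held-out data.

from collections import defaultdict

# Each record: (subgroup, true_label, model_prediction); 1 = condition present
predictions = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

hits = defaultdict(int)       # true positives per subgroup
positives = defaultdict(int)  # actual positives per subgroup

for group, label, pred in predictions:
    if label == 1:
        positives[group] += 1
        if pred == 1:
            hits[group] += 1

rates = {g: hits[g] / positives[g] for g in positives}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: TPR = {rate:.2f}")
print(f"TPR gap: {gap:.2f}")  # a large gap flags potential differential treatment
```

A persistent gap like this is a signal to revisit the training data and model before deployment, not proof of intent.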

AI and patient privacy

Healthcare organizations often de-identify patient data before accessing and using it. But what counts as de-identified varies with the laws and regulations governing each dataset. HIPAA, for example, defines de-identified data as data with various identifiers removed, including:

  • Names, birth/admission/discharge/death dates and ages. 
  • Telephone and fax numbers; Social Security, medical record, account, certificate/license and health plan beneficiary numbers; license plate and other vehicle identifiers; and device identifiers and serial numbers.
  • Email and postal addresses.
  • Web universal resource locators (URLs).
  • Biometric identifiers, including voice and fingerprints.
  • Full-face photographic and comparable images.

Removing these identifiers mitigates privacy risks to individuals and allows organizations to share patient data without potentially violating HIPAA — which was introduced without AI and ML in mind. This de-identification is especially important for organizations dedicated to medical research and treatment.
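As a simplified illustration of that idea, the Python sketch below drops direct-identifier fields from a structured record. The field names are hypothetical, and real de-identification pipelines cover the full HIPAA identifier list with vetted tooling and expert review rather than an abbreviated set like this.

```python
# Minimal sketch: removing HIPAA-style direct identifiers from a structured
# patient record. Field names are hypothetical; real pipelines follow the
# full Safe Harbor list (or expert determination), not this abbreviated set.

IDENTIFIER_FIELDS = {
    "name", "birth_date", "admission_date", "discharge_date",
    "phone", "fax", "ssn", "medical_record_number", "account_number",
    "health_plan_id", "email", "address", "url", "device_serial",
}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with identifier fields dropped."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

patient = {
    "name": "Jane Doe",
    "birth_date": "1984-02-17",
    "ssn": "000-00-0000",
    "diagnosis_code": "E11.9",  # clinical content is retained
    "lab_result": 6.8,
}

print(de_identify(patient))  # {'diagnosis_code': 'E11.9', 'lab_result': 6.8}
```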

Yet AI products can erect unique roadblocks to de-identification. As AI-powered tools proliferate, more data and new data elements flow into AI systems, creating potential privacy issues. And although a system may start with de-identified data, the probability of introducing identifiable data rises as more data is added.

Organizations must continually assess potential risks and consider whether their AI systems are generating more identifiable, not de-identified, data as they scale. Another reason this additional identifiable data could cause issues for healthcare organizations? Technology that uses and relies on protected health information (PHI) presents an attractive target for cybercriminals.
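One way to operationalize that ongoing assessment is a re-identification risk check such as k-anonymity: if any combination of quasi-identifiers maps to very few records, those records are easier to re-identify. The Python sketch below uses hypothetical quasi-identifiers (age band and ZIP prefix) purely for illustration.

```python
# Simplified sketch: flagging re-identification risk with a k-anonymity check.
# If any combination of quasi-identifiers (here: age band and ZIP prefix,
# both hypothetical choices) maps to fewer than k records, those records
# become easier to re-identify as the dataset grows.

from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"age_band": "40-49", "zip_prefix": "462", "lab_result": 5.1},
    {"age_band": "40-49", "zip_prefix": "462", "lab_result": 6.3},
    {"age_band": "70-79", "zip_prefix": "463", "lab_result": 7.2},  # unique combination
]

k = k_anonymity(records, ["age_band", "zip_prefix"])
if k < 2:
    print(f"k = {k}: at least one record is uniquely identifiable by its quasi-identifiers")
```

Rerunning a check like this as new data arrives is one concrete way to notice when a system is drifting from de-identified toward identifiable.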

The sheer magnitude of data, emerging technology able to re-identify previously de-identified data, a constantly evolving regulatory landscape with its patchwork of laws, and the incredible volume of third-party vendors with access to sensitive data make AI a unique risk in the healthcare industry.

Value of AI

And yet, challenges aside, the value AI brings to healthcare is difficult to overstate. AI offers countless opportunities and use cases in healthcare, and its algorithms can be applied across innumerable care and research settings. It opens doors to:

  • Increased and enhanced data analysis.
  • Improved patient experiences.
  • More strategic, informed decision-making.
  • More efficient workflows.

One of the biggest challenges healthcare organizations have faced, a sort of ‘last mile’ problem, is patient engagement and adherence. This barrier between negative and positive health outcomes often surfaces in the difficulties patients face when trying to communicate with their healthcare providers.

Yet the more proactively patients participate in their healthcare journey, the better the outcomes, whether clinical, financial or experiential. And AI is a rising star in addressing some of those challenges.

One area that benefits especially from AI is the analysis of unstructured conversational data. When patients call their healthcare provider, the conversation is often recorded. AI enables the analysis of each interaction and the compilation of large datasets from which healthcare leaders can extract insights to inform strategic business decisions, identify personnel training needs, recognize pain points along the patient journey and more.
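As a rough illustration of how such analysis rolls up into trends, the sketch below tags invented transcripts with simple keyword rules and counts the resulting categories. Real conversational-intelligence systems use trained language models rather than keyword matching; the categories and keywords here are made up.

```python
# Stripped-down sketch: tagging recorded-call transcripts with simple keyword
# rules and aggregating the results into trends a healthcare leader could act on.
# Transcripts, categories and keywords are invented for illustration.

from collections import Counter

CATEGORY_KEYWORDS = {
    "billing_confusion": ["bill", "charge", "invoice"],
    "access_barrier": ["appointment", "wait", "callback"],
    "medication_question": ["refill", "dosage", "prescription"],
}

def tag_transcript(text: str) -> list[str]:
    """Return every category whose keywords appear in the transcript."""
    text = text.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in text for w in words)]

transcripts = [
    "I was charged twice on my last bill and no one called me back.",
    "I need a refill but can't get an appointment for weeks.",
    "What dosage should I be taking after the change?",
]

trends = Counter(cat for t in transcripts for cat in tag_transcript(t))
print(trends.most_common())  # pain points ranked by frequency across calls
```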

Tools exist to mitigate the potential security and privacy risks of collecting this valuable data, including voice obfuscation, redaction, aggregated trends and anonymization. Other steps organizations should take, especially until regulations guiding AI use within the healthcare industry become standardized, include championing transparency, obtaining patient consent and assessing third-party vendors before implementing new technology infrastructure.
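Redaction, for example, might look something like the following sketch, which masks obvious PHI patterns in transcript text with regular expressions. The patterns are deliberately simplistic; commercial redaction tools combine ML-based entity detection, rules and human review.

```python
# Minimal sketch: regex-based redaction of obvious PHI patterns from a call
# transcript before it is stored or analyzed. The patterns are intentionally
# simple and would miss much real-world PHI.

import re

REDACTION_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a bracketed placeholder label."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

call = "You can reach me at 317-555-0192 or jane.doe@example.com."
print(redact(call))
# You can reach me at [PHONE] or [EMAIL].
```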

AI offers nearly limitless potential to transform the healthcare industry. It can revolutionize diagnostics and treatments and improve patient experiences. But decision-makers at all levels of the organization must evaluate each AI tool’s safety and security, especially where private patient data is concerned. 

Michael Armstrong, CTO

Michael Armstrong is the Chief Technology Officer at Authenticx and a foundational leader in building our solution infrastructure. In this role, he leads a team of data engineers and scientists to translate big visionary ideas into practical and actionable software. Michael has extensive experience in engineering, data architecture, product development, and business intelligence.