By Michelle Garvey Brennfleck (Healthcare Industry Group Co-Leader), Carly Koza (healthcare associate) and Natalie Oehlers (FDA & Biotechnology associate) at Buchanan Ingersoll & Rooney
Artificial Intelligence (AI) is a powerful tool that is reshaping the way healthcare providers diagnose, treat, and monitor patients while simultaneously decreasing costs and reducing potential liability. As AI continues to expand into the healthcare space, however, the lack of standardization for AI-based technologies continues to draw increased scrutiny, not only from the industry itself but also from legislators and policymakers. Additionally, a variety of legal and ethical considerations raise substantial concern and distrust among the public at large, including those surrounding liability, safety and transparency, fairness and bias, informed consent, and privacy and security.
Traditionally, healthcare providers alone would assess a patient and deliver medical advice. Today, while AI does not replace provider knowledge, experience, and clinical judgment, it can be, and in certain circumstances is, used to deliver diagnoses and recommend treatments for individual patients more quickly. AI is also used to develop personalized support plans for patients’ continued care more easily and efficiently. Despite these applications, flaws in the use of AI, particularly unrepresentative data sets, faulty classifications, and an inability to identify new incidents and bias, can lead to negative patient outcomes. For example, a variety of AI systems improperly use the amounts patients pay for healthcare services as a proxy for the overall medical needs of a specific community. This, in turn, leads to significant bias against low-income populations, particularly those in minority communities, who have limited access to care due to financial vulnerability.
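To make the proxy problem concrete, here is a minimal sketch, with hypothetical numbers, of how a risk score built on healthcare spending rather than clinical need can systematically under-score patients whose access to care is limited:

```python
# Minimal sketch, with hypothetical numbers, of the proxy problem:
# a "risk score" based on healthcare spending rather than clinical
# need under-scores patients whose access to care is limited.

patients = [
    # (group, true_need, access) -- "access" scales how much of a
    # patient's need actually turns into billed spending
    ("high-access", 8, 1.00),
    ("high-access", 5, 1.00),
    ("low-access",  8, 0.55),  # same need, but fewer billed encounters
    ("low-access",  5, 0.55),
]

for group, need, access in patients:
    spending = need * access  # the proxy label a model would learn from
    print(f"{group:11s}  true need = {need}   spending-based score = {spending:.1f}")

# Ranking by the spending-based score flags the high-access patients as
# "sicker" even though clinical need is identical across groups, so
# low-access patients are under-referred to additional care programs.
```

Even this toy example shows why the choice of training label, not just the model itself, can embed bias against financially vulnerable populations.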
To combat these potential flaws, many companies, including those within the healthcare industry, have added a layer of human review to confirm results generated by AI. But until the underlying data sets and training models are further validated, the perception that AI is simply a new source of inaccuracy and misdiagnosis in medical care will likely persist.
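As a rough illustration of that review layer, the sketch below routes low-confidence AI outputs to a clinician rather than acting on them automatically; the threshold and the (prediction, confidence) interface are assumptions for illustration, not a clinical standard:

```python
# Minimal sketch of a human-review gate: AI findings below a confidence
# threshold are queued for a clinician instead of being auto-accepted.
# The threshold and interface are illustrative assumptions only.

REVIEW_THRESHOLD = 0.90

def triage(prediction: str, confidence: float) -> str:
    """Decide the next step for an AI-generated finding."""
    if confidence >= REVIEW_THRESHOLD:
        # even "auto-accepted" findings are documented for later audit
        return f"accept pending clinician sign-off: {prediction}"
    return f"route to clinician review: {prediction} ({confidence:.0%} confidence)"

print(triage("no acute finding", 0.97))
print(triage("possible pneumonia", 0.62))
```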
The use of AI has also raised demonstrated privacy, security, and informed consent concerns. Patients have a right to know how providers make decisions about their treatment and how their protected health information is disseminated for research, diagnosis, and treatment purposes. However, AI systems are often difficult to explain: their design and function lack transparency, and the interactions and factors behind their determinations are complex. In addition, AI development appears to have monetized patient data through potentially invasive collection practices, driven by the volume of data needed to power these systems. To address these concerns, many companies have established internal protocols and monitoring systems that set standards for the disclosure, consent, access, correction, portability, and reasonable use of personal information.
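One concrete form such a protocol can take is an audit record of each AI-assisted decision, documenting what the system contributed and what patient data it used. A minimal sketch follows; the field names are illustrative assumptions, not a regulatory or HIPAA-defined schema:

```python
# Minimal sketch of a disclosure/audit record documenting how AI
# factored into a patient's care. Field names are illustrative
# assumptions, not a regulatory or HIPAA-defined schema.

import json
from datetime import datetime, timezone

def ai_use_record(patient_id: str, system: str, role: str,
                  data_sources: list[str]) -> str:
    """Build a JSON audit entry describing the AI system's role."""
    return json.dumps({
        "patient_id": patient_id,
        "ai_system": system,
        "role_in_decision": role,      # what the AI actually contributed
        "data_sources": data_sources,  # the patient data the system used
        "clinician_reviewed": True,    # the AI assists; a clinician decides
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(ai_use_record("pt-001", "imaging-triage-v2",
                    "flagged study for priority radiologist read",
                    ["chest x-ray", "prior imaging reports"]))
```

Records like this give providers something concrete to disclose when a patient asks what role AI played in their care.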
Ever-changing AI systems may also make it difficult to trace and allocate responsibility for any harm or damage caused by their use. Such systems may not fit neatly within negligence or product liability doctrines or other traditional legal frameworks. This responsibility gap, in which neither the programmer, the manufacturer, nor the provider clearly has or accepts responsibility, becomes even more prominent as AI systems grow more autonomous.
Regulatory intervention may be critical to mitigating the risks of AI and ensuring that AI systems are safe for public use, especially in healthcare settings. Members of Congress have met with CEOs, academics, and industry leaders in AI to draft appropriate legislation. Various bills and resolutions regarding AI have also been proposed or enacted at the state level, with approximately 20 states introducing, and several states enacting, such measures in 2022 alone.
Moving forward, stakeholders within healthcare systems should be aware of these concerns and determine how to prevent and address issues that may arise. When implementing AI to provide diagnoses and recommend treatments, providers should honor the principle of informed consent by explaining how the AI system works and disclosing the scope and role of AI in determining courses of treatment. Healthcare entities should also assess any bias or potential for inaccuracy in an AI system and implement operating procedures that address these issues alongside those surrounding the privacy and security of patient data.