AI’s Increasing Role in the Health Care Delivery System: Key Legal Considerations

Updated on August 29, 2023

Executive Summary 

No personal services are more important than health care services. Artificial intelligence (AI), the use of machines to perform tasks associated with human intelligence such as reasoning and learning, is expanding what “personal” means. Recent breakthroughs in generative AI, a type of AI capable of producing natural language, imagery, and audio data, have made the technology increasingly accessible to health care providers. As AI becomes progressively ingrained in the health care services industry, providers have the opportunity to harness AI to augment the existing care delivery system and, in some cases, potentially replace existing human processes. This creates a growing need to rapidly build regulatory frameworks across the industry to monitor and limit the use of AI.

In a recent Yale CEO Summit survey, 48% of CEOs identified health care as the industry in which AI will have its greatest effect—more than any other sector. This Alert analyzes how AI is already affecting the health care industry, as well as some of the key legal considerations that may shape the future of generative AI tools.

The Emerging Regulatory Landscape

Government regulators and medical organizations are already setting guardrails to address the sometimes remarkably unreliable information provided by generative AI platforms. The American Medical Association (AMA) recently addressed the issue of medical advice from generative AI chatbots such as ChatGPT and intends to collaborate with the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and others to mitigate medical misinformation generated by these tools. It also plans to propose state and federal regulation to address the subject.

Both the Department of Health and Human Services (HHS) and the Centers for Medicare and Medicaid Services (CMS) issued “AI Playbooks” to outline their positions on AI technology in accordance with the goals outlined in Executive Order 13960, titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” These playbooks are of increasing importance and essential reading for providers contemplating the use and effects of AI.

This government guidance is coming as the health care industry becomes more invested in AI technology. In 2019, the Mayo Clinic entered into a ten-year partnership with Google to bolster its use of cloud computing, data analytics, and machine learning. Four years later, the provider announced plans to use Google’s AI search technology to build chat platforms across its network that tailor the user experience for its physicians and patients. Other companies are in the beginning stages of creating generative AI platforms targeting the health care industry. For example, Glass Health is developing a platform built on a large language model (LLM), a type of deep learning model trained on voluminous data sets, to draft care plans and suggest possible diagnoses for patients based on short or incomplete medical record entries. Another example is Amazon’s recent announcement that it will invest $100 million to create the Amazon Web Services (AWS) Generative AI Innovation Center, which will assist customers in building and deploying AI solutions; health care is one of the initiative’s primary focus areas. The HHS and CMS AI Playbooks will serve as key references during the development of these platforms and initiatives.

Offloading the Administrative Burden

One of AI’s attractions in the health care industry is its potential to streamline administrative processes, reduce operating expenses, and increase the amount of time a physician spends with a patient. Administrative expenses alone account for approximately 15% to 25% of total national health care expenditures in the United States. The American Academy of Family Physicians reports that the average primary care patient visit lasts approximately 18 minutes, only 27% of which is dedicated to direct contact with the patient, whereas 49% is consumed by administrative tasks. Process automation of repetitive tasks, which does not involve AI, has long been part of the patient encounter experience, from appointment scheduling to the revenue cycle management process. Nevertheless, half of all medical errors in primary care are administrative errors. Deploying AI to initiate intelligent actions has the potential to reduce clerical errors and improve upon those currently automated processes.

Health care entities are already taking advantage of this emerging technology to increase administrative efficiencies. The prior authorization process can now be streamlined, allowing providers to submit prior authorization requests to payors electronically. Transcription services can now be automated using natural language processing and speech recognition, reducing human error and physician burnout—an issue of growing importance. Health care systems are also applying algorithms to surgical scheduling. One example is the analysis of individual surgeon data to optimize block scheduling of surgical suites, in some cases reducing physician overtime by 10% and increasing space utilization by 19%.
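To make the block-scheduling concept concrete, the following minimal Python sketch estimates each surgeon’s typical case length from historical records and fills a fixed operating-room block with a simple greedy heuristic. The data, field names, and default values are hypothetical, and production scheduling systems use far more sophisticated optimization; this is illustrative only.

# Hypothetical sketch: estimate typical case length per surgeon from
# historical data, then greedily fill one surgical block without
# exceeding its capacity. Illustrative data and assumptions only.
from statistics import median
from collections import defaultdict

history = [  # (surgeon, case_minutes) -- invented sample data
    ("Dr. A", 95), ("Dr. A", 110), ("Dr. A", 100),
    ("Dr. B", 60), ("Dr. B", 75), ("Dr. B", 70),
]

durations = defaultdict(list)
for surgeon, minutes in history:
    durations[surgeon].append(minutes)
typical = {s: median(m) for s, m in durations.items()}  # per-surgeon estimate

def fill_block(requests, block_minutes=480, turnover=30):
    """Greedy first-fit: schedule requested cases into a single block."""
    scheduled, used = [], 0
    for surgeon in requests:
        est = typical.get(surgeon, 120) + turnover  # assumed default if no history
        if used + est <= block_minutes:
            scheduled.append(surgeon)
            used += est
    return scheduled, used

print(fill_block(["Dr. A", "Dr. B", "Dr. A", "Dr. B"]))

In practice, the same idea would be run across many surgeons, rooms, and days, with constraints on staffing and equipment, which is where the reported overtime and utilization gains come from.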

Machine Empathy: Androids Dreaming of Electric Sheep

Can AI technology teach providers how to be more empathetic? While Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, imagined a dystopian future in which AI was viewed as devoid of empathy, today the potential exists for AI to guide physicians’ positive behavior toward their patients. Though currently unconventional, AI has the potential to empower physicians to consider the impact their communications have on patients’ lives. With guidance from AI technology on how to broach difficult subjects, such as terminal illnesses or the death of a loved one, physicians may be able to interact with others more confidently and positively, building a deeper sense of trust with their patients. Even in the pre-AI world, positive communication behaviors were repeatedly shown to reduce the likelihood of litigation and to lower health care costs.

While ChatGPT’s shortcomings are well documented in other areas, a June 2023 study determined that ChatGPT was not only capable of formulating “thoughtful,” compassionate answers to patient questions or concerns, but that in some cases its answers were preferred over communications from physicians. The University of California San Diego research study compared responses to patient questions generated by ChatGPT against responses from human physicians, covering everything from simple ailments to serious medical concerns. Feedback from participants indicated that the chatbot’s answers were rated, on average, seven times more empathetic than the human responses. While machine-manufactured empathy may be anxiety-inducing to many, AI need not replace physicians in conversations requiring clarity and compassion, but rather can serve as a complement to those interactions.

“Dr. ChatGPT” – LLMs and a Call to Regulate

Generative AI chat tools may be useful for patients and physicians alike to locate and allocate resources, develop care plans, and diagnose and treat medical conditions. The American Hospital Association (AHA) analyzed the benefits of providers implementing systems compatible with AI technologies, highlighting their use as clinical support tools, largely because of the high volume of data such systems can process. The AHA cautioned, however, that such tools will require continued monitoring by experts to ensure they are properly integrated into existing health care infrastructures and systems.

Indeed, the expanding use of these tools in the health care space creates a significant issue: how to be confident that these tools are providing reliable information. These concerns are particularly relevant in a digital age in which ChatGPT has passed both medical school exams and the United States Medical Licensing Examination. Researchers found that the technology passed the exam without any specialized training or reinforcement. Is it now appropriate to use these tools for medical purposes?

As one potential answer, preliminary results of a long-term Swedish trial involving 80,020 women, published on August 2, 2023, showed that an AI-supported screen-reading protocol for mammogram interpretation identified 20% more cases of breast cancer than standard screen readings performed by two radiologists. The researchers stated that the results indicated the use of AI in mammography screening is safe. Further, AI-assisted techniques reduced the radiologists’ screen-reading workload by 44.3%, enabling the physicians to focus instead on more complex clinical interpretations.

As another potential answer, however, take the National Eating Disorders Association (NEDA)’s AI-powered LLM chatbot, “Tessa.” Tessa’s mission was to promote wellness and provide resources for people affected by eating disorders. Tessa was implemented to replace NEDA’s longstanding helpline in response to growing demand for the service. However, like other AI chatbots, Tessa’s responses were prone to “hallucinations”—techspeak for a chatbot’s inaccurate responses. Moreover, Tessa came under scrutiny following reports from patients and doctors that the chatbot was not assisting individuals with eating disorders, but was instead offering harmful dieting advice. Following the controversy, NEDA indefinitely disabled the chatbot.

NEDA is not alone in experiencing issues with generative AI-powered chat tools like Tessa. False or misleading information, particularly relating to medical information, leaves users vulnerable and potentially at risk. 

Providers can also face risk when relying on AI tools as part of their clinical decision making. A study published in the Journal of Nuclear Medicine investigated the potential liability of physicians using AI systems by evaluating hypothetical patient scenarios in which an AI tool made a treatment recommendation that departed from the recognized physician standard of care. The results indicated that people (e.g., jurors) tend to consider two factors when evaluating physicians who use AI systems: “whether the treatment provided was standard and whether the physician followed the AI recommendation.” Tobia et al., When Does Physician Use of AI Increase Liability?, 62 J. Nuclear Med. 17–19 (2021). This conclusion suggests that physicians might face increased medical malpractice liability when accepting an AI recommendation that leads to a nonstandard care decision.

The full scope of liability arising from AI chatbot medical advice is still emerging, particularly when the chatbot is sponsored by a health care industry organization, but this is undoubtedly within the regulators’ sights.

Wearable Devices and Privacy Implications

From the invention of the mechanical pedometer in 1780 to current technology capable of detecting medical emergencies and chronic illnesses, wearable devices have become an integral part of today’s health care delivery system. The data derived from these devices allow patient care decisions to be made with greater speed and accuracy. The devices also serve to deepen the physician-patient relationship through more frequent interactions with the provider or staff that drive patient engagement in the care process. These technologies, however, are built on algorithms driven by patient data ranging from demographic information to confidential medical information.

The federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) required the creation of national standards to keep protected health information (PHI) from being disclosed without the patient’s consent or knowledge. HIPAA and its corresponding state laws are the first line of defense against threats related to the collection and transmission of sensitive PHI by wearable devices. The HHS Office of Information Security addressed these concerns in a September 2022 presentation (essential reading for health care data privacy and security professionals) that calls for blanket multi-factor authentication, end-to-end encryption, and whole disk encryption to prevent the interception of PHI from wearable devices.
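As a simplified illustration of one of the controls referenced above, the Python sketch below encrypts a wearable-device payload before transmission using a symmetric key (the widely used cryptography package’s Fernet primitive). The device identifier, field names, and key handling are hypothetical; true end-to-end encryption would also involve secure key provisioning and exchange, which are omitted here.

# Hypothetical sketch: encrypt a device reading before it leaves the wearable,
# so intercepted traffic does not expose PHI. Requires the third-party
# "cryptography" package (pip install cryptography).
import json
from cryptography.fernet import Fernet

# In practice, keys would be provisioned per device and exchanged securely;
# generating one inline here is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = {"device_id": "demo-001", "heart_rate": 72,
           "timestamp": "2023-08-29T12:00:00Z"}  # invented sample payload
ciphertext = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# Only a holder of the key (e.g., the provider's ingestion service) can decrypt.
recovered = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert recovered == reading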

The FTC strengthened consumer protections against unauthorized disclosures of personal health records in 2009 when it announced its Health Breach Notification Rule (HBNR). This rule was designed to cover the entities that HIPAA does not, such as developers and vendors of mobile health apps and direct-to-consumer health technologies like fitness trackers. Acknowledging the growth of health apps, the agency has proposed modifications to the HBNR to keep up with technological advancements. 88 Fed. Reg. 37,753, 37,819–39 (June 9, 2023).

Litigation regarding AI data collection and use has begun. For example, a recent class action lawsuit in the Northern District of California against OpenAI, the creator of ChatGPT, alleged, among other things, violation of users’ privacy rights based on data scraping of social media comments, chat logs, cookies, contact information, login credentials, and financial information. P.M. v. OpenAI LP, No. 3:23-cv-03199 (N.D. Cal. filed June 28, 2023). The FTC has also initiated several recent enforcement actions under the HBNR against digital health companies that violated the rule’s requirements, resulting in hefty civil penalties in some cases. In this context, the ramifications for misuse of PHI are significant.

Fraud, Waste, and Abuse Prevention

Companies are harnessing AI to detect and prevent fraud, waste, and abuse (FWA) in health care payment systems. Reliable fraud detection systems are vital for providers, not only to avoid legal liability where misuse of federal funds from programs like Medicare and Medicaid is involved, but also to help keep costs down. A report by the National Bureau of Economic Research indicated that widespread use of AI systems could result in savings of 5–10% of health care spending in the United States. MIT researchers also reported that insurers consider the return on investment in FWA detection systems to be among the highest of all AI investments. One large health insurer reported savings of $1 billion annually through AI-prevented FWA. However, at least one federal appellate court determined earlier this year that one company’s use of AI to provide prior authorization and utilization management services to Medicare Advantage and Medicaid managed care plans is subject to a level of qualitative review that may result in liability for the entity utilizing the AI. Doe v. eviCore Healthcare MSI, LLC, No. 22-530-CV, 2023 WL 2249577 (2d Cir. Feb. 28, 2023).
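To illustrate the kind of FWA screening described above, the following Python sketch uses an unsupervised anomaly detector (scikit-learn’s IsolationForest) to flag claims whose billing patterns deviate from their peers. The claim features, values, and threshold are invented for illustration; this is a stand-in for the concept, not any payor’s actual system, and flagged claims would still require human review.

# Hypothetical FWA screening sketch: flag anomalous claims with an
# unsupervised model. Features, values, and threshold are illustrative only.
from sklearn.ensemble import IsolationForest

# Each row: [billed_amount, units_billed, distinct_procedure_codes]
claims = [
    [120.0, 1, 1], [135.0, 1, 1], [110.0, 1, 1], [140.0, 2, 1],
    [125.0, 1, 1], [9800.0, 40, 12],  # last row is an obvious outlier
]

model = IsolationForest(n_estimators=100, contamination=0.2, random_state=0)
model.fit(claims)

flags = model.predict(claims)  # -1 = anomalous, 1 = normal
for row, flag in zip(claims, flags):
    if flag == -1:
        print("Flag for manual review:", row)

The eviCore decision noted above is a reminder that even automated screening and authorization pipelines remain subject to qualitative legal review, so human oversight of model output is not optional.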

Conclusion

The effect of AI on health care will grow in scale and scope. New initiatives, along with concomitant calls for regulation, are announced almost daily. Legislators and prominent health care industry voices have called for the creation of a new federal agency that would be responsible for evaluating and licensing new AI technology. Others suggest creating a federal private right of action that would enable consumers to sue AI developers for harm resulting from the use of AI technology, such as in the OpenAI case discussed above. It seems unlikely that legislators and regulators can quickly enact a comprehensive framework, but the need to do so is increasingly urgent. Legislators in Europe have been working to address this challenge for some time. The rise of AI in high-risk health care treatment encounters has heightened the need for accountable, equitable, and transparent AI design and governance.

Before utilizing generative AI tools, health care providers should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, security and data processing practices vary from tool to tool. Before implementing generative AI tools, organizations and their legal counsel should (a) carefully review the applicable terms of use, (b) determine whether the tool offers features that enhance data privacy and security, and (c) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements. It is crucial that these protections be reinforced and augmented quickly, as threat proliferation is, and will remain, a critical issue.

Douglas Grimm

Douglas Grimm is a partner and Health Care Practice Leader at law firm ArentFox Schiff. Douglas advises health care providers on corporate and regulatory matters, including compliance, data privacy, and reimbursement.