Start With Disclosure

Updated on June 5, 2023

Healthcare, patient safety and the rise of advanced AI

Healthcare rarely leads when it comes to the adoption of new technologies, but this moment is unique. Health tech innovators have the opportunity to set the standard for responsible use of generative AI by voluntarily disclosing when patients or health plan members are interacting with machines, rather than people. 

Recently, more than 27,000 researchers, entrepreneurs and thought leaders publicly called for a six-month pause in the development of AI systems more powerful than the current GPT-4. Signatories include Apple co-founder Steve Wozniak; Elon Musk, CEO of Tesla, SpaceX and Twitter; and dozens of noted professors from universities around the globe.

The group is expressing concern about the unchecked and unregulated advances in AI technologies that, they say, “can pose profound risks to society and humanity,” a claim that is both accurate and a scare tactic. AI is a tool – and a powerful one – but like any tool, its outcomes are driven by how people use it. I am an advocate for responsible AI, and my company uses the technology to automate data accuracy in healthcare.

While I do share some of the concerns expressed in the letter, I also see opportunity. Policy always lags woefully behind technological innovation, yet leaders in regulated industries have historically stepped forward to create safeguards even before regulation requires it.

Sam Altman, CEO of OpenAI, the company behind ChatGPT, met with legislators in Congress in mid-May. But while Congress is catching up, it won’t be in time to protect patients from risk.

Altman suggested that technologies like his could provide medical advice for people who can’t afford care. But even he admitted that we “need enough time for our institutions to figure out what to do” and that “regulation will be critical.” The challenge? Regulation isn’t immediate — but disclosure can be.

In healthcare, generative and other advanced AI may be the answer to many administrative burdens and may one day be an important part of both administrative tasks and clinical care. However, as these technologies are being perfected, it’s critical that we, as an industry, understand the appropriate role for any new technology and assess the risks before deploying untested solutions.

Rapid advances in Large Language Models (LLMs), like those behind ChatGPT and Google’s Bard, make it incredibly difficult, if not impossible, for patients to discern whether the advice they are receiving comes from a licensed clinician or a bot. Generative AI is designed to sound both confident and authoritative, but the technology is not capable of delivering medical advice and shouldn’t be used that way.

Imagine a healthcare scenario in which a patient calls a provider help line to ask whether two prescriptions interact. With AI voice-simulation technology, it may be difficult to tell whether the voice on the phone belongs to a licensed human provider or a programmed yet unregulated machine.

There are two phenomena at play here that are setting the healthcare ecosystem up for big problems, but they also give healthcare an opportunity to lead the charge for responsible use of the technology:

  • First, generative AI is trained to replicate language and deliver plausible text responses – not to synthesize information or be factually correct. Adding fuel to the fire is a relatively new phenomenon dubbed “hallucitations” by Kate Crawford, one of the leading scholars on the social and political implications of artificial intelligence: these bots cite made-up sources and articles that don’t exist to support their responses, making them appear even more valid.
  • Second, humans may be more inclined to believe a machine than another human. In a phenomenon called “techno-chauvinism,” some people believe in the infallibility of machines and the superiority of technology-based solutions while placing less trust in humans.

Healthcare can lead with disclosure

Legislation will take months, if not years, but the healthcare industry can set the standard now. Voluntary disclosure is simple: let patients know at the start of every interaction whether they are engaging with a machine or a human.

The disclosure should 1) state that the patient is interacting with a generative AI bot, which should not be used for clinical support, and 2) identify the software system and company responsible for the content and development of that bot.

It could look as simple as this:

“Hi, this is Sally. I am a robot powered by ChatGPT. While I can answer some of your questions, I am not a healthcare provider and cannot give you clinical advice.”
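
To make this concrete, here is a minimal sketch, in Python, of what a disclosure-first chat session could look like. Everything here is illustrative: the BotDisclosure structure, the start_session helper and the vendor name are assumptions for the sketch, not part of any existing product or API.

```python
from dataclasses import dataclass


@dataclass
class BotDisclosure:
    """The two required disclosure elements: bot identity and responsible vendor."""
    bot_name: str   # the bot's persona name, e.g. "Sally" (illustrative)
    model: str      # the underlying generative AI system
    vendor: str     # the company responsible for the bot's content and development

    def message(self) -> str:
        # Mirrors the sample disclosure above, plus the responsible vendor.
        return (
            f"Hi, this is {self.bot_name}. I am a robot powered by {self.model} "
            f"and operated by {self.vendor}. While I can answer some of your "
            "questions, I am not a healthcare provider and cannot give you "
            "clinical advice."
        )


def start_session(disclosure: BotDisclosure, send) -> None:
    """Send the disclosure before the bot is allowed to say anything else."""
    send(disclosure.message())
    # Only after this point would the generative model handle patient messages.


if __name__ == "__main__":
    start_session(
        BotDisclosure(bot_name="Sally", model="ChatGPT", vendor="Example Health Tech"),
        send=print,
    )
```

The design point is that the disclosure is emitted by the session wrapper rather than generated by the model itself, so it appears verbatim in every interaction and cannot be paraphrased or dropped.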

Meghan Gaffney

Meghan Gaffney is the CEO and co-founder of Veda, a company that brings science and imagination together to modernize healthcare with human-in-the-loop smart automation. Veda saves healthcare companies millions and makes it easier for patients to access care. Meghan has over 15 years of experience working with elected officials and mission-driven organizations, as well as consulting on technology opportunities. She is a passionate advocate for artificial intelligence and machine learning and believes these technologies will create unprecedented economic opportunity for the United States and the world.