We Cannot Afford AI-Washing in Healthcare

Updated on July 23, 2024

It’s not hard to guess what motivates companies to overstate or make inaccurate claims about their use of AI. Having an AI-based or “AI-powered” product today means a company might secure more interest from investors, it may be seen as more technologically capable or credible, or it may benefit in other ways from the current wave of excitement about this tech. 

You’ve probably even experienced some effects of the AI land grab yourself. The sheer volume of content and marketing today around the term “AI” can be exhausting. If first reading this headline triggered an involuntary eyeroll, I wouldn’t blame you. 

But beyond feeling haunted by an omnipresent acronym, AI-washing has staggering potential to derail entire industries from adopting and benefiting from effective technologies. It could even lead to another AI winter. Those of us who have been in the industry for a while remember the end-all-be-all product that rhymes with Datsun, and the fate it met.

We must take AI-washing seriously. Nowhere is the practice more worrisome than in healthcare, where human lives hang in the balance.

The Danger of Leading Regulators Astray

Regulators have a sizable task ahead of them in determining the appropriate guardrails and boundaries needed for AI technology in healthcare. And that's assuming the information they're getting is correct. I've personally heard rulemakers draw false equivalences between social media and AI, which in and of itself tells me the right information isn't reaching the right people when it comes to policy.

When companies overstate their AI capabilities, regulators will struggle to assess their products accurately, and unsafe products could find their way onto the market. As those products fail or lead to negative outcomes, they will erode trust in and adoption of AI across the space for years to come.

As another consequence, this misalignment could lead to the development of ineffective guidelines and of regulations that are either too lax or too stringent.

To prevent this, there are two popular forms of AI-washing in healthcare that regulators must remain especially cautious of. First, we must remain on high alert for companies overstating their solutions' abilities to battle bias. As awareness of the risks of biased data influencing AI in healthcare has become widespread, so too have assurances from developers that their solutions address this deeply rooted issue. Misleading stakeholders about a product's capability to handle and overcome bias can amplify existing biases and escalate ethical issues, ultimately reinforcing mistrust in AI.

We see a similar tendency for companies to overstate the originality of their models. Some developers are reluctant to admit how large a role upstream models play in their AI product. There is a strong desire to be seen as having developed an original model, tailor-made for its use case. But deceptive messaging about a model's origin has real market consequences.

This phenomenon leads regulators to believe risk is diversified when, in fact, it is not. A handful of upstream models could disrupt all of their downstream users should they fail in any way. SEC Chair Gary Gensler recently remarked on this in a speech at Yale, stating, “such network interconnectedness and monocultures are the classic problems that lead to systemic risk.”

Potential Risk of Blocking Better Uses of Resources & Innovation

Regulators won’t be the only victims, far from it. Providers and plans, who interact directly with patients and members and bear responsibility for their health, will be on the front lines too.

By adopting tools based on inaccurate information, under the impression they are more advanced or capable than they really are, providers and plans risk diverting resources away from better investments and genuine innovations. On an individual scale, this distraction from more worthwhile projects harms patient and member health. At scale, it slows the progress of global healthcare overall.

To wade through the many AI solutions marketed to them today, providers and plans should institute a validation process that begins with awareness that AI-washing is real. Decision makers should take a skeptical approach to any solution that points to complex technology as its backing without being willing to explain it in more depth. If you’re evaluating solutions, do your research on the company, and check out their website and LinkedIn. Were they talking about AI before ChatGPT hit the scene? This can help reveal whether there’s a degree of bandwagoning occurring.

When presented with a solution, ask how it works, how it was developed, and what datasets it leverages. The more questions we ask, the better we can separate fact from fiction. If you feel that a salesperson or executive is being overly vague or repeatedly avoiding a question you’re asking, turn elsewhere.

And even post-implementation, rigorously test and measure. Take an evidence-first approach and measure against set KPIs. The sooner an overstated solution is identified, the sooner resources can be saved and reinvested in better technologies that will improve outcomes.

People Can Be Hurt, Plain and Simple. 

The sad truth is that members and patients will be the ultimate victims of AI-washing in our industry, and by the time we realize it, it will likely be too late: resources will already have been wasted, and members and patients already harmed.

Alarms are being raised weekly over whether new AI solutions are as built out and capable as they claim to be. Nurses in San Francisco recently protested over this very concern.

In addition to the impact on health outcomes, overstated AI solutions will breed further mistrust in our healthcare system. 

So, we must retain high standards for AI health tech and look to evidence to prove impact. The term AI should not inherently garner more interest from investors, nor be automatically seen as more credible. 

When it comes to technologies that can save lives, every ounce of trust must be earned.

Trey Sutten is CEO of Siftwell.