Overcoming AI’s trust problem in healthcare

Updated on September 30, 2025

The potential for AI in healthcare is immense. It can impact all facets of the industry, from clinics and patient care to pharmaceuticals and health insurance. One of the most promising applications is care personalization: considering how tough diagnoses like diabetes or cancer could benefit from a much more tailored approach for each patient. Of course, there are also more modest, under-the-radar uses, like relieving physicians of some of their administrative burden or streamlining insurance claims. Applications like these may be easier to implement in the near term, while still offering substantive impact in the long term. Increasing efficiency could allow providers to spend more time on patient care and help lower costs, leading to more accessible and equitable care across demographics.

The American Medical Association recently published new survey data showing that physicians’ confidence in AI is on the rise. This is an encouraging development, given that practitioners are crucial to AI adoption and implementation. Despite the rapid adoption and promise of AI in healthcare, public perception hasn’t been so keen. Another study showed that Americans would rather donate blood than willingly share their health data – a telling indicator of patient trust, or the lack thereof. This poses a challenge, as robust, high-quality health data is essential for breakthrough innovations like AI-assisted diagnostics, new treatments, and even population health applications that rely on large amounts of information to mitigate emerging public health threats.

Trust will be just as important to invest in as technical development. Addressing these concerns is a necessary step toward building meaningful AI solutions in healthcare.

Understanding the apprehension

Given prior, very public mishandlings of user information in healthcare, those building AI solutions should expect skepticism. As new technology becomes more common in healthcare decisions, questions about privacy, consent, and equity will continue to shape adoption. While studies show that the majority of people are comfortable sharing health data for their own care, there is deep skepticism toward the commercial use of personal information.

There is also a nuanced distinction in how the public perceives physicians’ use of AI. Most Americans are comfortable with AI tools that assist doctors, but not with technology replacing care providers in most facets of the job. For instance, only 28% of people would accept a prescription written solely by AI. This underscores that acceptance of AI is conditional, shaped by how the technology is implemented and whether it is being used to augment human judgment.

Understandably, attitudes also differ by age, especially when it comes to health data use by the government or insurance companies. Over half of Gen Z adults, who grew up in a digital world where data is commonly shared, are comfortable with the government using their health data for policy. By comparison, only 36% of older adults agree. Health insurers in the US have already seen falling brand and customer experience scores, which only compounds existing uncertainty.

Addressing the root cause 

Trust is fundamentally built on transparency and accountability, and a lack of it can stall innovation. Though AI offers significant potential, successful adoption will be driven by human acceptance. There is an entire field of study dedicated to human-centered artificial intelligence, and at its core it highlights the need for clear communication, well-defined applications, a robust AI framework, and demonstrable value for users. Anonymization, obfuscation, and the use of synthetic data offer new ways to harness the potential of AI while respecting the privacy of user information.

It is also imperative that any clinicians using AI tools are adequately trained. In the American Medical Association’s augmented intelligence research, physicians highlighted proper training and education on AI tools as one of the top components for AI adoption in practice. This includes an understanding of AI’s purpose and limitations, when to actually use it in real scenarios, and how to interpret and validate the results. Trust is reciprocal – if a provider feels confident and can articulate the benefits of AI-assisted tools to a patient, acceptance is more likely to follow.

Given that an individual’s healthcare is inherently personal, there is a lower tolerance for error. This could lengthen the adoption curve and require AI-assisted human decision making on the provider side for a longer period before healthcare develops the trust in AI that other fields and applications, like customer service, are already seeing. That is why ensuring AI lives up to its potential to drive healthcare transformation will require not only mindset shifts, but also systemic shifts in the way things are done. The World Economic Forum recently published a white paper detailing some of the fundamental changes needed for widespread adoption and success. Among the many trends and ideas it laid out, the message was clear: both the public and private sectors have an opportunity to reimagine how healthcare is made available and delivered.

Curing the AI skepticism 

Establishing trust must be a foundational priority. The most promising new AI tools will need to respect individual rights while delivering meaningful benefits. That starts with addressing the sources of apprehension: privacy, data governance, clinician training, equitable access, and clear regulatory frameworks. The industry must create a system that values patient engagement, embeds provider transparency throughout its processes, and drives AI-enabled outcomes that are both effective and trusted.

Nick Magnuson
Head of AI at Qlik

Nick is the Head of AI at Qlik, executing the organization’s AI strategy, solution development, and innovation. He joined the company through the acquisition of Big Squid where he was the firm’s CEO. He has previously held executive roles in customer success, product, and engineering in the field of machine learning and predictive analytics. A practitioner in the field for over 20 years, Nick has published original research on machine learning and predictive analytics, as well as cognitive bias and other quantitative topics. Nick received his AB in Economics from Dartmouth College.