Why Patients Living with Serious Mental Illness May Be Vulnerable to AI-Generated Health Information

Updated on December 15, 2025

Artificial intelligence (AI) tools have become part of daily life. For many, AI can be a helpful starting point for finding information, functioning much like a high-powered search engine. For people living with serious mental illness (SMI), however, it can also introduce a unique set of risks. 

OpenAI, the company that created ChatGPT, recently shared that 0.07% of its 800 million weekly users show “possible signs of mental health emergencies related to psychosis or mania.” That’s roughly 560,000 people every week who may be interpreting AI-generated content through their personal experience of delusions, paranoia or impaired insight.

Individuals living with SMI can already be at heightened risk of being disproportionately influenced by external stimuli, due to the nature of their condition and its symptoms. Moreover, AI-generated content is widely acknowledged to be potentially incomplete or even inaccurate, and is better treated as a starting point than an end point. A person who is less likely or less able to verify the reliability of the information they receive may end up relying on it more than they should. If that person is experiencing paranoia, for example, they might dismiss an answer that is accurate, or conversely believe an AI response that is inaccurate. Either scenario could present challenges to that person’s wellbeing.

When AI gives just enough information to cause problems

For individuals experiencing delusions, AI tools may reinforce distorted thinking by providing just enough language or partial facts to match what they’re looking for, inadvertently contributing to confirmation bias. Because AI can generate endless variations on a theme, people who keep trying will eventually find something that seems to confirm what they already believe. 

For example, patients will sometimes look up a medication and see information that sounds alarming, such as content about side effects they may be worried about, without understanding the clinical context or who that information actually applies to. That’s where AI gets tricky: it can take one piece of information and present it as if it applies across the board, to everyone. And because it delivers information in the same confident, conversational way regardless of the query, patients may not realize that part of the answer is missing or not fully accurate. This can create a false “hit,” and some individuals, especially those who are vulnerable, may not be able to recognize the difference.

When AI starts to compete with clinical context

This is where the use of AI in a healthcare context can become complicated for both the patient and the clinician. When a patient brings information from an AI search into the exam room or clinician’s office, it is often presented as fact, or as a substantiated third-party recommendation they’re putting a lot of weight on. That can make the discussion a bit testy: as clinicians, we’re trying to validate their questions and support the initiative they’ve shown while also pointing out that the information they received, however authoritative it felt, may not be complete or accurate.

Effective therapeutic relationships are built on trust, and AI can unintentionally wedge itself between patients and their clinicians. As clinicians, we support patients doing their own research and advocating for themselves; these are good things. However, it’s also important that we help patients see the limitations of digital sources. AI doesn’t know their personal or health history, doesn’t understand all their symptoms or how their illness shows up for them, and doesn’t know what medications they’ve tried or the context that goes into making a treatment decision. That’s the part patients can miss, and where healthcare providers need to step in and give the full picture. 

Patients living with SMI may not be able to understand why information they read online doesn’t apply to them, or why it shouldn’t be treated as definitive without first talking with their provider. If their thought process is already distorted, even slightly misleading AI content can feel like confirmation of what they already fear or believe. This is why involving a healthcare provider in diagnosis and treatment is essential. 

Mental illness isn’t something that can be evaluated by a single line of text or a generic answer. Psychiatry is a field where nuances and conversations matter. There’s no algorithm that can replace hearing someone’s history, understanding their social context, recognizing their patterns or noticing subtle changes in their presentation. These are things AI simply cannot pick up.

Where treatment structure can help

One aspect of psychiatric care where I find the in-person patient-provider dynamic especially critical is treatment planning and decision-making. Treatment guidelines agree that taking medications as prescribed is important for symptom management and overall outcomes. With oral medications in particular, many factors can contribute to inconsistent medication use. The potential for information generated through an AI search to contribute to a skipped dose, or even abrupt discontinuation, concerns me as a clinician. 

This is where long-acting injectable (LAI) medications, such as ARISTADA® (aripiprazole lauroxil), should be considered and may make a meaningful difference for appropriate patients. LAIs may reduce day-to-day decision-making through structure and consistency. And because they require administration by a healthcare professional on a regular schedule, they build in natural patient-provider touchpoints. When a patient is receiving an injectable medicine, their care team knows whether the medication has been received, knows the medicine is on board for the full dosing period, and can monitor how the patient is doing in order to catch issues early. 

In my experience, AI, or any potential outside influence, has fewer chances to disrupt adherence for patients on an LAI. With an injectable, the treatment plan isn’t relying on 30 separate daily decisions about whether to take a pill. It’s one scheduled visit and then ongoing monitoring. This structure can be protective for medication adherence, and by extension protective of the person.

Simple steps that could make AI safer

What I would like to see is AI platforms implementing simple disclaimers for healthcare- or medical-related search results. For example, including a response such as, “This is artificially generated information. Please see your healthcare professional for confirmation,” would be incredibly beneficial. This disclosure would function as a buffer, reminding people that AI-generated information does not reflect the full picture. For individuals living with SMI who may be more vulnerable to misinformation, even a small reminder like that could help.

AI is here to stay, and it’s becoming part of how people look up information about their health. But for individuals with SMI, the chance of misunderstanding is potentially much higher, driven by the realities of their condition. When you combine baseline vulnerability with limited insight and the way AI delivers everything with such apparent confidence, it can lead to real problems. That’s why clinicians must handle these conversations carefully. We have a responsibility to help patients sort through what they’re seeing and hearing so that they can continue to feel empowered along their healthcare journey. 

Dr. Richard W. Miller
Psychiatrist at Elwyn Adult Behavioral Health
Richard Miller, MD, is a psychiatrist who specializes in treating schizophrenia.