The mental health crisis in the U.S. continues to deepen, and traditional care systems struggle to meet the growing need. Too many people wait weeks or months for appointments. Too many clinicians carry overwhelming caseloads without the capacity to respond to everyone seeking help. This isn't about provider commitment; clinicians are working harder than ever. The issue is systemic: demand has outpaced the resources and infrastructure needed to deliver timely care.
AI monitoring tools are emerging as one way to address these operational challenges. These tools don't replace the therapeutic relationship between patients and clinicians. Instead, they help match available clinical resources to patient needs in real time, so everyone gets care while the most urgent cases receive immediate attention.
Iris Telehealth’s 2025 AI & Mental Health Emergencies Survey reveals that many Americans are open to AI monitoring their mental health, but they draw a clear line: humans must remain in charge of all care decisions. This finding offers healthcare leaders a path forward, one that uses technology to enhance access while preserving the human connection that makes behavioral health care effective.
The Scale of the Challenge
A 2025 study found that nearly one in 10 adults experienced a mental health crisis in the past year. Another found that more than one in eight Americans aged 12 and older experienced symptoms of depression between 2021 and 2023, nearly double the rate from a decade ago.
The system can’t keep pace with this surge in need. Mental Health America reports 17.7 million adults experienced delays or cancellations in their mental health care, and 4.9 million couldn’t access needed care at all. These numbers indicate that people in crisis are waiting days or weeks to see someone, and some never connect with help.
This is where AI monitoring tools can make a difference — not by providing therapy or treatment, but by helping overstretched care teams identify who needs immediate attention. When providers can quickly spot patients at highest risk, they can prioritize appointments and outreach more effectively.
How AI Can Support Overstretched Care Teams
AI-driven monitoring tools excel at tasks that consume significant clinical time: scanning for patterns across large patient populations, flagging changes in behavior that might signal escalating risk, and helping care teams prioritize who needs immediate attention.
For example, these tools can analyze medication histories, therapy patterns, and clinical assessments to identify when a patient’s mental health was at its strongest — insights that would take clinicians hours to piece together manually.
These tools can track metrics that would be difficult for clinicians to monitor manually across hundreds of patients — changes in digital check-in frequency, shifts in voice tone during telehealth appointments, or altered typing patterns that might indicate distress. The goal would never be to replace clinical judgment. These tools are meant to give providers better information to make faster decisions about resource allocation.
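To make the multi-signal idea concrete, here is a minimal, hypothetical sketch of how a few engagement signals might be combined into a single triage score that only ever adds a patient to a clinician review queue. The signal names, weights, and threshold are illustrative assumptions for this article, not a description of Iris Telehealth's system or any production algorithm.

```python
# Hypothetical sketch: combine simple engagement signals into a triage flag
# that queues a patient for HUMAN review. Signal names, weights, and the
# threshold are illustrative assumptions, not a real product's logic.
from dataclasses import dataclass


@dataclass
class PatientSignals:
    patient_id: str
    checkin_drop_pct: float   # decline in digital check-in frequency vs. baseline (0-100)
    missed_appointments: int  # missed telehealth appointments in the last 30 days
    distress_keywords: int    # count of distress-related terms in recent messages


def triage_score(s: PatientSignals) -> float:
    """Weighted score from multiple signals; no single signal triggers a flag on its own."""
    return (
        0.4 * min(s.checkin_drop_pct / 50.0, 1.0)
        + 0.3 * min(s.missed_appointments / 2.0, 1.0)
        + 0.3 * min(s.distress_keywords / 3.0, 1.0)
    )


def flag_for_human_review(s: PatientSignals, threshold: float = 0.6) -> bool:
    """Return True if combined signals warrant adding the patient to a clinician
    review queue. This is a prioritization hint only; a clinician reviews the
    case and decides whether and how to reach out."""
    return triage_score(s) >= threshold


if __name__ == "__main__":
    example = PatientSignals("patient-001", checkin_drop_pct=60, missed_appointments=1, distress_keywords=2)
    print(triage_score(example), flag_for_human_review(example))
```

The design choice worth noting is that the output is a queue entry, not an action: the score helps care teams decide who to look at first, while the decision about outreach stays with a human.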
In our survey, nearly half (49%) of Americans indicated they would use AI tools to monitor their mental health, viewing it as a practical way to speed up detection and access to care. Their caveat, however, was that a human should always make the final care decisions. AI can observe and flag concerning patterns, but clinicians need to review those alerts, talk to patients, and determine the appropriate course of action.
Where Americans Set Boundaries
Privacy concerns typically dominate conversations about AI; Pew Research shows 71% of adults worry about AI misusing personal data. Mental health appears to be an exception: when the stakes involve potential crisis or self-harm, many people are willing to share data with monitoring tools that could help them get timely support.
Despite openness to AI-driven monitoring, Americans want human connection. About three-quarters (73%) of Iris survey respondents prefer a human provider to have the final say if AI flags an emergency. If action is needed, 28% want a pre-selected family member or friend contacted, 27% want a trained counselor to respond within 30 minutes, and 32% want full control over seeking help. Just 22% trust AI to connect them to a professional without consent.
What It Takes to Build Trust
For AI monitoring tools to gain widespread acceptance, specific safeguards must be in place. The same Iris survey found that more than half (56%) say it’s extremely important that AI explain its reasoning when flagging high-risk cases. A third (32%) want a licensed therapist to review every AI recommendation before action. Another quarter wants full control over monitoring, and 16% want the ability to override AI decisions. When mistakes happen, 42% believe both AI developers and healthcare providers should share accountability.
These responses are conditions for adoption, but comfort levels and requirements vary significantly by demographic group. This has direct implications for how healthcare organizations should approach implementation.
Gender
Men show more openness to AI monitoring overall (56%) compared to women (41%). Women are also more likely to insist on human oversight, with 78% wanting human providers to make final decisions in AI-flagged emergencies versus 68% of men. This suggests messaging that emphasizes efficiency may work for male patients, while messaging focused on safety protocols and clinical oversight may be more effective for women.
Generation
Nearly three in ten millennials (29%) and a quarter of Gen Z (24%) feel very comfortable with AI detecting crises, compared to just 5% of boomers. Younger users are open to more automated responses, while 74% of boomers prefer maintaining full control over help-seeking. Healthcare organizations may need phased rollouts that start with younger demographics while preserving traditional pathways for older patients.
Income
Income patterns challenge typical assumptions about early technology adopters. Lower-income adults show the highest receptivity, with 61% willing to use AI monitoring tools versus 44% of the highest earners. This suggests these tools should be positioned as accessible alternatives to expensive traditional care rather than premium innovations.
What Healthcare Leaders Should Consider
Healthcare organizations evaluating AI crisis detection tools need to address patient concerns head-on:
- False positive fears: Approximately 30% worry about false positives leading to unnecessary interventions. This means AI systems must meet strict validation standards, use thresholds that require multiple data points before triggering an alert, and guarantee human review within defined timeframes for every AI-flagged case.
- Technology creating distance: Nearly a quarter of people (23%) fear technology will create distance between patients and clinicians. This concern underscores why human oversight needs to be built into every stage of the process. AI should accelerate detection and help prioritize who needs immediate attention, but clinicians must review assessments, make treatment decisions, and maintain direct contact with patients.
- Integration preferences: One-third of consumers are more likely to use AI mental health tools if they’re embedded in systems they already know, like Epic MyChart portals or Zoom telehealth sessions, rather than requiring them to download separate monitoring apps.
- Demographic-specific strategies: Implementation strategies should also account for demographic differences. Messaging that emphasizes efficiency and innovation may resonate with male patients and younger demographics, while approaches focused on human oversight and safety protocols may be more effective with women and older adults. Starting with tech-comfortable younger users before expanding to older generations could ease the transition.
AI won’t solve the mental health crisis, but it can dramatically help overstretched care teams work more efficiently. When implemented thoughtfully, with proper safeguards and human oversight at the center, AI-driven remote patient monitoring can help identify people who need urgent attention and connect them to the human care that makes recovery possible.

Andy Flanagan
As CEO, Andy Flanagan is responsible for Iris Telehealth's strategic direction, operational excellence, and cultural success. With significant experience across the U.S. and global healthcare systems, Andy is focused on improving people's lives through the success of the patients and clinicians Iris Telehealth serves. He has worked at some of the largest global companies and led multiple high-growth businesses, giving him a unique perspective on behavioral health challenges around the world. Andy holds a Master of Science in Health Informatics from the Feinberg School of Medicine at Northwestern University and a Bachelor of Science from the University of Nevada, Reno. He is a four-time CEO; his prior experience includes founding a SaaS company and holding senior-level positions at Siemens Healthcare, SAP, and Xerox.