By Clifton Roberts
A little over a year ago, my daughter, Kendall, checked herself into an emergency room because she felt ill. She advised the ER staff that she was hearing voices and felt like hurting herself. Two hours after checking in, she was discharged with a brochure in hand about suicide prevention and the number for a toll-free support hotline. Four hours later she did the unthinkable and took her own life.
As a father, I remain devastated. As someone who works in the technology industry, I believe my daughter could have been helped, and perhaps saved, by data and technologies like machine learning and artificial intelligence (AI). A range of AI-enabled technologies for healthcare can assist with remote diagnosis, disease mapping and prevention, and identification of populations susceptible to certain dangerous conditions. Why not apply these to mental health also? AI, particularly machine learning and analytics, could contribute much here, but progress has been slow.
Here is an alternative scenario that could have occurred on the night of my daughter’s distress:
As my daughter approached the triage counter, staff could have authenticated her ID biometrically and quickly pulled up her history of mental illness, leading to an initial diagnosis. After being identified as potentially dangerous to herself or others, she could have benefited from technologies like machine learning. AI algorithms could have immediately produced a more precise diagnosis by matching her self-described symptoms against a model trained on existing national health data. These AI inferences would have led to a diagnosis of suicidal ideation, or one within the range of codes that deal with cognition, perception, emotional state, and behavior. Based on this data-driven diagnosis, she would have been admitted and treated, rather than turned away with a brochure and a phone number.
This isn’t just a grieving father’s fantasy. Good predictive capabilities in the area of mental health are beginning to be available, and others will be soon. Scientists are aiming to use AI to catch depressive behavior early, with the hope of averting severe mental illness.
For example, a machine learning algorithm created at Vanderbilt University Medical Center uses hospital admissions data, including age, gender, zip code, medication, and diagnostic history, to predict the likelihood of any given individual taking their own life. In trials using data gathered from more than 5,000 patients who had been admitted to the hospital for either self-harm or suicide attempts, the algorithm was 84 percent accurate at predicting whether someone would attempt suicide the following week.
Recently, scientists at Stanford School of Medicine used AI to analyze brainwave patterns in people diagnosed with major depression. The aim was to determine which patients responded well to sertraline, a common antidepressant. With this kind of information, researchers hope to be able to personalize mental health treatments. They emphasize, however, that society must demand that these kinds of study results get used in clinical care.
Another recent study, from the University of California, Los Angeles (UCLA), used machine learning to develop an app called MyCoachConnect. Patients with serious mental illnesses called a toll-free number once or twice a week and answered three open-ended questions about how they were feeling. The app then tracked individuals’ own words to analyze how their responses changed over time, allowing doctors to intervene as soon as serious changes were noted. The individuals participating said they spoke more freely to a computer-generated voice and felt less lonely because they knew that someone would be listening.
In the year since Kendall’s passing, I celebrate her memory, inspired by the hope that AI can be used in critical ways that make lives better, and even save them. Nothing can replace my daughter or remove the pain of her passing. But one thing I can do is advocate for innovative technologies that could save someone else in a similar situation. We are using AI in other types of healthcare applications with impressive results. It’s time we directed AI to the mental health conditions that are so pervasive in our world.
About the Author: Clifton Roberts is a global director of cloud and data policy at Intel Corp., where he has worked for over nine years. He is a board member of Lakewood College and In Defense of Animals, and a volunteer with the PC Pal Program and The Humane Party. Clifton graduated from the University of California, Berkeley, with a BA in political science.