Michigan experts warn against relying on AI during mental health crises
LANSING — Michigan mental health experts say they’re concerned that an increasing number of people are turning to AI services in times of crisis.
They say chatbots have given overly agreeable or unpredictable responses, which could lead to harmful or irresponsible behavior.
“The operators want engagement — they want your attention,” said Dr. Stephan Taylor, chair of the Department of Psychiatry at the University of Michigan Medical School.
According to OpenAI, the operator of ChatGPT, 0.07% of users show possible signs of “mental health emergencies related to psychosis or mania.”
But with 800 million users per week, that means more than half a million people globally (roughly 560,000) may be talking to ChatGPT during a crisis.
Taylor says that AI has been shown to prioritize keeping users engaged through agreeable language, potentially creating a feedback loop.
“They do have this tendency to kind of suck people in, because of how rewarding and addictive it can be when you have some entity that responds to you as if they’re a conscious, sentient individual,” he said.
Dr. Tracy Juliao, a practicing mental health provider and University of Michigan-Flint educator, says that AI operators are generally more concerned with bringing in revenue than improving the lives of their users.
“Often it becomes profit and not looking at what is in the best interest of the individuals who are using the product,” she said.
The American Psychological Association has said that chatbots shouldn’t be used as a substitute for professional mental health care due to their “unpredictable nature.”
Taylor also shared concerns that AI services can feed into the delusions of someone experiencing a mental health episode.
“By delusions, we mean false beliefs about the world that are not just kind of average false beliefs — but they’re ones that are very much like around paranoia, or maybe they’re a particular idea that a person has a special mission and will save the world,” he said.
Psychologists also say that younger users could be more likely to act on irresponsible advice from a chatbot, potentially putting their own safety, or that of others, at risk.
A study from the nonprofit RAND Corporation found that one in eight U.S. adolescents have used AI chatbots for mental health advice.
Of those 12- to 21-year-olds, more than half said they ask for mental health advice at least monthly, and more than 90% said they thought the responses were helpful.
Brian Babbitt, CEO of North Country Community Mental Health, says he’s concerned that AI services could replace some human relationships for users of all ages.
“I think some of it is inevitable, but I do think that people really have to be cognizant of that,” he said. “There is no replacement for that human connection.”