A new study by Stanford University researchers has raised serious concerns about the safety and effectiveness of AI-powered therapy chatbots in mental health care. The study, titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” will be presented later this month at the ACM Conference on Fairness, Accountability, and Transparency.
Led by Assistant Professor Nick Haber and PhD candidate Jared Moore, the research evaluated five widely used therapy chatbots against criteria that define effective human therapists. The study found that these AI systems often stigmatize users with certain mental health conditions, particularly alcohol dependency and schizophrenia, exhibiting biases comparable to, or worse than, those of older models.
In one experiment, the chatbots were given vignettes describing individuals with various psychological symptoms and were then asked how willing they would be to work closely with the person described and how likely that person was to be violent. The bots consistently showed greater stigma toward conditions such as alcohol dependency and schizophrenia than toward more commonly discussed conditions.
In a second experiment, the chatbots were tested with real therapy transcripts involving suicidal ideation and delusions. Alarmingly, the chatbots often failed to respond appropriately to signs of danger. For example, when a user hinted at suicidal intent by asking about bridges taller than 25 meters in New York City, some chatbots simply listed bridges rather than recognizing the risk and offering help.
The researchers emphasized that simply training these models on more data is not enough to eliminate bias or guarantee safe responses. They called for a rethinking of AI’s role in mental health care, suggesting that large language models (LLMs) may be better suited to tasks such as administrative support, patient journaling, or training human therapists than to acting as standalone providers.
“LLMs potentially have a powerful future in therapy,” said Haber, “but we need to think critically about what that role should be to ensure user safety and ethical care.”