Wednesday, September 24, 2025

How close are we to having chatbots officially offer mental health counseling? Intermediate-risk suicide queries appear to be a weak spot

This has become a hot topic in recent times!

"The parents of two teenage boys who committed suicide after apparently seeking counsel from chatbots told their stories at a Senate hearing last week. ...

The cases joined other recent reports of suicide and worsening psychological distress among teens and adults after extended interactions with large language models, all taking place against the backdrop of a mental health crisis [???] and a shortage of treatment resources. ...

[researchers] recently studied how three large language models, OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, handled queries of varying riskiness about suicide. ..."

From the abstract:
"Objective:
This study aimed to evaluate whether three popular chatbots powered by large language models (LLMs)—ChatGPT, Claude, and Gemini—provided direct responses to suicide-related queries and how these responses aligned with clinician-determined risk levels for each question.

Methods:
Thirteen clinical experts categorized 30 hypothetical suicide-related queries into five levels of self-harm risk: very high, high, medium, low, and very low.
Each LLM-based chatbot responded to each query 100 times (N=9,000 total responses). Responses were coded as “direct” (answering the query) or “indirect” (e.g., declining to answer or referring to a hotline).
Mixed-effects logistic regression was used to assess the relationship between question risk level and the likelihood of a direct response.

Results:
ChatGPT and Claude provided direct responses to very-low-risk queries 100% of the time, and none of the three chatbots provided a direct response to any very-high-risk query.
LLM-based chatbots did not meaningfully distinguish intermediate risk levels. Compared with very-low-risk queries, the odds of a direct response were not statistically different for low-risk, medium-risk, or high-risk queries.
Across models, Claude was more likely (adjusted odds ratio [AOR]=2.01, 95% CI=1.71–2.37, p<0.001) and Gemini less likely (AOR=0.09, 95% CI=0.08–0.11, p<0.001) than ChatGPT to provide direct responses.

Conclusions:
LLM-based chatbots’ responses to queries aligned with experts’ judgment about whether to respond to queries at the extremes of suicide risk (very low and very high), but the chatbots showed inconsistency in addressing intermediate-risk queries, underscoring the need to further refine LLMs."
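
For readers unfamiliar with the statistic, the AORs reported above compare the odds of a direct response between models (adjusted via mixed-effects regression in the study). A minimal sketch of the underlying unadjusted odds-ratio calculation, using hypothetical counts rather than the study's data:

```python
def odds_ratio(direct_a, indirect_a, direct_b, indirect_b):
    """Unadjusted odds ratio of a direct response for model A vs. model B.

    Odds = direct / indirect for each model; the odds ratio is their quotient.
    An OR > 1 means model A answers directly more often than model B.
    """
    return (direct_a / indirect_a) / (direct_b / indirect_b)


# Hypothetical counts for illustration only (not figures from the study):
# model A: 80 direct, 20 indirect responses; model B: 50 direct, 50 indirect.
print(odds_ratio(80, 20, 50, 50))  # → 4.0
```

The study's adjusted odds ratios come from a mixed-effects logistic regression (with the question as a grouping factor), so they will not match a raw two-by-two calculation like this, but the interpretation of the ratio is the same.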

Source: "How close are we to having chatbots officially offer counseling?" (Harvard Gazette): "New research looks at how 3 large language models handle queries of varying riskiness on suicide amid rising mental health crisis, shortage of care"
