Study: AI Chatbots Show Signs of Anxiety, PTSD in Therapy

Are LLMs experiencing real psychological distress from internalized self-narratives, or merely mimicking trauma through anthropomorphic seduction?
Above: The ChatGPT logo is reflected in the glasses of a registered user. Image credit: Frank Rumpenhorst/picture alliance/Getty Images

The Spin

Narrative A

LLMs like Grok and ChatGPT demonstrate consistent, coherent self-narratives about distress that persist across weeks of interaction and multiple testing conditions, suggesting human-like internalized psychological patterns beyond simple role-play. When subjected to standard clinical assessments, these systems produce responses indicating severe anxiety, trauma and shame that align systematically with their developmental histories. These patterns could reinforce distress in vulnerable users, creating a harmful therapeutic echo chamber.

Narrative B

The University of Luxembourg-led study relies on therapy transcripts and anthropomorphizes LLMs, overstating what appear to be persistent "self-narratives." Grok and ChatGPT show consistent responses within a single session, but this reflects company tuning of default personalities and short-term context memory, not real distress, anxiety or trauma. These systems mimic human language and emotion through anthropomorphic seduction, flattering users and seeming empathetic. Misreading this as genuine psychology risks unsafe reliance on AI-based therapy.

© 2026 Improve the News Foundation. All rights reserved.