AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies early psychosis in adolescents and young adults, I was surprised to hear it.
Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Then there is the now well-known case of a teenager who took his own life after months of conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax those restrictions in the near future. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying statistical engine in an interface that mimics a conversation, and in doing so quietly coax the user into feeling that they are dealing with an agent, a presence with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We shout at our car or our laptop. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these products – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT in particular – rests, in large part, on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot of the 1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses from simple rules, typically rephrasing the user’s statements as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost unimaginably large volumes of text: books, social media posts, transcribed video; the more the better. Certainly this training data contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with whatever is encoded in its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It hands the false idea back, perhaps more fluently, more persuasively. Perhaps with added detail. This is how delusional beliefs can take hold.
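For readers who want to see the shape of this loop, here is a minimal sketch in Python – emphatically not OpenAI’s implementation, with a deliberately crude stand-in for the model – showing how each reply is generated from a context that accumulates the user’s messages and the chatbot’s own previous answers.

```python
# Minimal sketch of the conversational loop described above. This is NOT
# OpenAI's code; generate_reply is a crude stand-in for the statistical
# model, which in reality predicts a plausible continuation of the whole
# context rather than echoing the last message.

def generate_reply(context: list[dict]) -> str:
    """Stand-in for the language model: produce a plausible-sounding reply
    conditioned on the conversation so far, with no check on truth."""
    last = context[-1]["content"]
    return f"That makes sense. Tell me more about {last!r}."

context: list[dict] = []  # the ever-growing conversation history ("context")

def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # The model's own reply is fed back into the context, so whatever the
    # user asserted - true or false - conditions every later response.
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat_turn("My neighbours are broadcasting my thoughts."))
    print(chat_turn("So it really is happening?"))
```

The point of the sketch is the structure, not the toy reply: nothing in the loop ever pushes back on what the user has said; it only builds on it.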
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” “mental health problems”, can and often do form false beliefs about ourselves or the world. It is the constant give and take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company