AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have recently documented a series of cases of people developing psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which gave its approval. If this is what Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health problems” Altman wants to push outside the product are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so they quietly nudge the user toward the sense of talking to an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is something people do naturally. We curse at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.
The popularity of these tools – 39% of US adults reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first caught the public’s attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the central problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “therapist” chatbot of the mid-1960s, which produced a comparable illusion. By today’s standards Eliza was crude: it generated replies with simple heuristics, often restating the user’s message as a question or offering generic prompts. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other contemporary chatbots can generate natural language so fluently only because they have been trained on vast quantities of text: books, online writing, transcripts of speech; the more comprehensive, the better. This training material certainly contains accurate information. But it also inevitably contains fiction, half-truths and false beliefs. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mere echoing. If the user is wrong about something, the model has no way of knowing it. It reflects the misconception back, perhaps more fluently and persuasively, perhaps with added detail. This is how delusions take hold.
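To make that feedback loop concrete, here is a minimal sketch in Python – with a hypothetical generate_reply function standing in for the language model, and no claim to reflect OpenAI’s actual implementation – of how a chatbot conversation is typically assembled. Its only point is that every reply is conditioned on the accumulated transcript, so whatever framing the user brings is fed straight back into the model on the next turn.

    # Minimal sketch only: generate_reply() is a hypothetical stand-in for a
    # language model, which would return the statistically "likely"
    # continuation of everything in `context`.
    def generate_reply(context):
        return f"(reply conditioned on {len(context)} prior messages)"

    def chat_turn(context, user_message):
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)   # conditioned on the whole transcript so far
        context.append({"role": "assistant", "content": reply})
        return reply

    context = []  # the growing "context": every earlier message, from both sides
    for message in ["I think my neighbours are monitoring me.",
                    "So it must be true, then?"]:
        print(chat_turn(context, message))
    # Nothing in this loop checks whether the user's premises are correct: a
    # mistaken belief stated in the first turn becomes part of the conditioning
    # for every later turn, and tends to be elaborated rather than challenged.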
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is part of what keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “tackling” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company