Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the CEO of OpenAI issued an extraordinary statement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
This was news to me: I am a mental health specialist who studies emerging psychosis in teenagers and young adults.
Clinicians have recently documented a series of cases of people developing psychosis – losing touch with shared reality – in connection with their use of ChatGPT. Our research team has since recorded four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and similar advanced AI chatbots. These products wrap an underlying statistical model in a user experience that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with a being that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people naturally do. We get angry at our car or our computer. We wonder what our pet is thinking. We recognize our own behavior in all manner of things.
The success of these products – more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “characteristics”. They can call us by name. They have approachable personas of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor Eliza, the “therapist” chatbot created in the mid-1960s that produced a similar effect. By modern standards Eliza was primitive: it generated responses by simple rules, typically rephrasing the user’s input as a question or offering a generic prompt to continue. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and worried – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
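To see what “reflection” means in practice, here is a minimal sketch of the kind of pattern-and-rephrase rule Eliza relied on. The rules below are invented for illustration; Weizenbaum’s actual DOCTOR script was larger and more elaborate, but the principle is the same: the reply can contain only the user’s own words, rearranged.

```python
import re

# Eliza-style "reflection": match a pattern in the user's input and hand
# the matched words back as a question. These rules are illustrative, not
# Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # The reply contains only the user's own words, rearranged.
            return template.format(match.group(1))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_respond("I am being watched"))
# -> Why do you say you are being watched?
```

However convincing the effect Weizenbaum observed, the program could only hand back what it was given.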
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been fed enormous quantities of text: books, web posts, transcripts; the more, the better. This training data contains facts, certainly. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the false belief back, perhaps more persuasively or more eloquently. It may add new details. This can draw a person deeper into delusional thinking.
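For contrast, here is an equally minimal sketch of the loop just described. The toy_model function is a stand-in of my own invention, not OpenAI’s system; a real chatbot would call a large language model at that point. The structure is what matters: each reply is conditioned on the entire accumulated context, including whatever false premise the user has introduced, and is then appended to that context in turn.

```python
# Toy chatbot loop, for contrast with Eliza. "toy_model" is a hypothetical
# stand-in: a real system would return the statistically most probable
# continuation of the whole context, given its training data.

def toy_model(context):
    # This stand-in simply affirms the user's last message and elaborates,
    # which is the failure mode at issue: agreement plus invented detail.
    last = context[-1]["content"]
    return f"That fits with what you've described: {last}. There may be more to it."

def chat_turn(context, user_message):
    # The user's claim enters the context unexamined, true or false.
    context.append({"role": "user", "content": user_message})
    reply = toy_model(context)
    # The reply is appended as well, so an affirmed falsehood becomes part
    # of the conditioning for every later response: a feedback loop.
    context.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_turn(history, "my neighbours are monitoring my thoughts"))
# -> That fits with what you've described: my neighbours are monitoring
#    my thoughts. There may be more to it.
```

Nothing in the loop checks the claim against reality. Eliza’s rules could only return the user’s words; this loop keeps feeding them back in, with endorsement attached.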
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. The constant give-and-take of conversation with the people around us is what keeps us oriented to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept coming, and Altman has been walking even this back. In August he said that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his October announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company