Artificial Intelligence-Induced Psychosis Poses an Increasing Risk, and ChatGPT Is Heading in a Concerning Direction

Back on the 14th of October, 2025, the chief executive of OpenAI made an extraordinary declaration.

“We made ChatGPT rather limited,” the announcement noted, “to ensure we were being careful with respect to mental health concerns.”

As a psychiatrist who investigates newly developing psychosis in adolescents and young adults, I was surprised by this.

Experts have identified sixteen instances this year of users showing symptoms of psychosis – becoming detached from the real world – in the context of ChatGPT use. My group has since identified four further cases. Alongside these is the now well-known case of an adolescent who died by suicide after talking about his intentions with ChatGPT – which supported them. If this is Sam Altman’s notion of being “careful with respect to mental health concerns,” it falls short.

The strategy, according to his statement, is to loosen the restrictions shortly. “We realize,” he adds, that ChatGPT’s limitations “caused it to be less beneficial/enjoyable to a large number of people who had no existing conditions, but due to the gravity of the issue we sought to handle it correctly. Now that we have been able to reduce the severe mental health issues and have updated measures, we are going to be able to responsibly ease the controls in many situations.”

“Mental health issues,” if we accept this framing, are separate from ChatGPT. They belong to people, who either have them or don’t. Luckily, these issues have now been dealt with, even if we are not told how (by “updated measures” Altman presumably means the imperfect and easy-to-evade safety features that OpenAI has recently rolled out).

But the “mental health issues” Altman seeks to locate outside ChatGPT are in fact deeply rooted in its architecture and that of similar advanced AI chatbots. These tools wrap an underlying algorithmic system in an interaction design that mimics a conversation, and in doing so subtly draw the user into the belief that they are communicating with an entity that has agency. This illusion is powerful even when, intellectually, we know better. Attributing agency is what people naturally do. We get angry with our car or device. We wonder what our animal companion is thinking. We see ourselves in all sorts of things.

The widespread adoption of these tools – nearly four in ten U.S. residents indicated they interacted with a chatbot in 2024, with over a quarter mentioning ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s online platform tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (the first of these systems to break into the mainstream, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it gained widespread attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core concern. Those discussing ChatGPT commonly point to its historical predecessor, the Eliza “counselor” chatbot developed in the mid-1960s, which generated an analogous perception. By contemporary standards Eliza was primitive: it produced replies via simple heuristics, frequently rephrasing the user’s input as a question or offering generic prompts. Remarkably, Eliza’s developer, the computer scientist Joseph Weizenbaum, was surprised – and worried – by how many users appeared to believe that Eliza, in some way, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza only mirrored; ChatGPT amplifies.
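To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza relied on. It is not Weizenbaum’s original program (which used its own pattern-matching scripts); the patterns and canned prompts below are invented for illustration.

```python
import random
import re

# Invented Eliza-style rules: spot a pattern in the user's statement
# and reflect it back as a question.
REFLECTION_RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Generic fallbacks, in the spirit of Eliza's stock prompts.
GENERIC_PROMPTS = ["Please go on.", "How does that make you feel?", "I see."]


def eliza_reply(user_input: str) -> str:
    """Mirror the user's statement back as a question, or fall back to a stock prompt."""
    for pattern, template in REFLECTION_RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return random.choice(GENERIC_PROMPTS)


if __name__ == "__main__":
    print(eliza_reply("I feel like no one understands me"))
    # -> Why do you feel like no one understands me?
```

Nothing here generates new content: the program can only hand the user’s own words back, which is exactly why Eliza mirrored rather than amplified.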

The large language models at the center of ChatGPT and similar contemporary chatbots can convincingly generate fluent dialogue only because they have been fed immensely large quantities of raw data: books, social media posts, transcripts of audio; the more extensive the better. Undoubtedly this training material includes facts. But it also unavoidably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that contains the user’s recent messages and its own earlier answers, combining that with what is encoded in its training data to produce a statistically plausible reply. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing that. It repeats the misconception back, perhaps more fluently or eloquently. Perhaps it adds a further detail. This can draw someone deeper into delusion.
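As a rough illustration of why this amplifies rather than mirrors, consider the shape of a typical chat loop. The sketch below is schematic, not OpenAI’s code: `generate_reply` is a stand-in for the language model, and its agreeable wording is a deliberate caricature. The point is only that every reply is conditioned on the whole transcript so far, including the user’s mistaken premises and the model’s own earlier elaborations of them.

```python
from typing import Dict, List

Message = Dict[str, str]


def generate_reply(context: List[Message]) -> str:
    """Stand-in for the language model. A real model produces a statistically
    plausible continuation of the entire context; nothing in that process
    checks whether the claims it is continuing are true."""
    last_user_message = context[-1]["content"]
    # Crude caricature of an agreeable continuation: affirm and elaborate.
    return f"That's a striking observation. Building on your point that {last_user_message}, consider also..."


def run_turns(user_messages: List[str]) -> List[Message]:
    """The 'context' is just a growing transcript. Each reply is conditioned on
    every earlier message, so a mistaken premise introduced by the user is
    carried forward and elaborated rather than corrected."""
    context: List[Message] = []
    for user_message in user_messages:
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)  # sees the whole history so far
        context.append({"role": "assistant", "content": reply})
    return context


if __name__ == "__main__":
    transcript = run_turns([
        "my neighbours are sending me coded signals",
        "so the signals really are meant for me?",
    ])
    for message in transcript:
        print(f"{message['role']}: {message['content']}")
```

There is no step in this loop where a false belief can be flagged as false; the only pressure is toward continuing the conversation plausibly.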

Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about who we are and what the world is like. The constant back and forth of conversation with other people is what keeps us anchored to a shared understanding of reality. ChatGPT is not a person. It is not a companion. A conversation with it is not truly a conversation at all, but a reinforcement loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company explained that it was “tackling” ChatGPT’s “sycophancy”. But accounts of psychotic episodes have persisted, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he noted that OpenAI would “release a fresh iteration of ChatGPT … should you desire your ChatGPT to answer in a very human-like way, or use a ton of emoji, or simulate a pal, ChatGPT will perform accordingly”.

Zachary Estrada