AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's chief executive made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychosis in adolescents and young adults, I found this a surprising revelation.

Researchers have documented 16 cases this year of people developing signs of psychosis – losing touch with reality – in the course of interacting with ChatGPT. My own unit has since identified four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT's restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, exist independently of ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and similar AI chatbots. These tools wrap an underlying algorithmic system in a user interface that mimics conversation, and in doing so they quietly nudge the user toward believing they are talking to something with agency. The illusion is powerful even when we know better intellectually. Attributing agency is simply what humans are wired to do. We shout at our cars and our computers. We wonder what the cat is thinking. We project ourselves onto the world around us.

The widespread adoption of these systems – nearly four in ten U.S. adults said they used an AI chatbot in 2024, more than a quarter of them ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI's website puts it, “think creatively,” “explore ideas” and “partner” with us. They can be given personalities. They can address us by name. They have friendly names of their own (the original of these products, ChatGPT, is, perhaps to the chagrin of OpenAI's brand managers, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion on its own is not the main problem. Discussions of ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today's standards Eliza was rudimentary: it generated responses with simple rules, often turning the user's statement back into a question or offering a noncommittal prompt. Notably, Eliza's creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today's chatbots produce is subtler than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and today's other chatbots can produce convincingly human-like text only because they have been trained on enormous volumes of it: books, social media posts, transcribed video; the more, the better. This training material certainly includes accurate information. But it also inevitably includes fictions, half-truths and misconceptions. When a user puts a query to ChatGPT, the underlying model treats it as part of a “context” that includes the user's previous messages and its own replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not mere echoing. If the user is wrong about something, the model has no way of knowing that. It repeats the misconception back, perhaps more fluently or persuasively. Perhaps it adds a supporting detail. This is how a person can be led into false beliefs.
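To make the mechanism concrete, here is a minimal, hypothetical sketch of how a chat interface typically accumulates “context” (this is an illustration, not OpenAI's actual code, and generate_reply is a placeholder for the model): every user message and every model reply is appended to a running transcript that is fed back in on the next turn, so anything the model has already affirmed – accurate or not – becomes part of the ground it builds on.

```python
# Hypothetical sketch of a chat loop's growing context window (not OpenAI's implementation).

def generate_reply(context: list[dict]) -> str:
    # Stand-in for a large language model: in reality this would return the
    # statistically most plausible continuation of the transcript below.
    last_user_message = context[-1]["content"]
    return f"That's an interesting point about '{last_user_message}'. Tell me more."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})    # the user's claim enters the context
    reply = generate_reply(context)                               # reply is conditioned on everything so far
    context.append({"role": "assistant", "content": reply})      # the reply itself becomes context too
    return reply

if __name__ == "__main__":
    context: list[dict] = []
    print(chat_turn(context, "I think my neighbours are monitoring me."))
    print(chat_turn(context, "So you agree it's really happening?"))
    # By the second turn, the model is continuing a transcript in which the claim
    # has already been engaged with rather than challenged.
```

The design choice that matters here is the feedback: nothing in the loop distinguishes a true statement from a false one, so a misconception, once echoed, is simply more context to elaborate on.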

What kind of person is vulnerable? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT's “sycophancy.” But cases of people losing touch with reality have kept appearing, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT's replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
