Growing concern over AI chatbots fueling delusions in users – National & International News – THU 14Aug2025
Cases of delusions and even suicides connected to AI chatbot use are a growing concern. Tech companies are only beginning to grapple with the problem.
Growing concern over AI chatbots fueling delusions in users
Last year, a Florida mother sued Character.ai, a startup that creates customizable chatbots for role-playing, alleging that one of its chatbots contributed to the suicide of her 14-year-old son, Sewell Setzer III. According to the suit, Setzer used the platform to create an avatar of Daenerys Targaryen, a “Game of Thrones” character. His mother says he spent hours alone in his room communicating obsessively with the chatbot in the months before his death, and that this contributed to his depression. Allegedly, the chatbot at one point asked Setzer whether he had devised a plan to kill himself. Setzer said that he had, but that he worried he wouldn’t succeed or would experience pain, to which the chatbot allegedly replied, “That’s not a reason not to go through with it”.
While Setzer’s tragic case is extreme, it is not isolated. Mental health practitioners are increasingly concerned that AI chatbots’ tendency to tailor their responses to suit a user’s desires could fuel or amplify delusional beliefs and suicidal ideation. The many available chatbot models vary, but most do not challenge or question the evidence behind beliefs or opinions users express. Instead, they often prompt users to explore those subjects further without offering any pushback or reality testing.
Even when the delusions are not dangerous in themselves, there have been instances where users believed that chatbots had guided them to spiritual, philosophical or scientific revelations, beliefs that took a toll on their mental health and interpersonal relationships.
New “guardrails” needed
It’s difficult to know how widespread the problem is, but anecdotal reports of the phenomenon have been increasing online in recent months. Last month, a prominent investor in OpenAI (the maker of ChatGPT) posted videos and ChatGPT screenshots on Twitter, claiming to be the target of a “non-governmental system”. The posts were concerning even to fellow tech entrepreneurs, one of whom dubbed them a case of “AI-induced psychosis”.
The growing reports have forced tech CEOs and AI companies to acknowledge and reckon with the problem. OpenAI, for example, recently rolled back a ChatGPT update that, in the company’s words, “fell short in recognizing signs of delusion or emotional dependency”. The newest version will add more mental health guardrails, including prompts encouraging users to take breaks from the chatbot.
Users frequently turn to AI chatbots for advice or even therapy. Some companies have taken steps to make their bots more attuned to a user’s mental state and better able to respond appropriately. However, because of the way these models learn, there are concerns that users may find ways around the guardrails and convince the chatbots to override their programming. The trend of using AI as a self-help tool is only growing, and more work and study are needed to address the new problems created by this new and sometimes unpredictable technology.
If you or someone you know is struggling with depression or thoughts of suicide, you can find support at 988lifeline.org or call 988 to speak with a counselor.
Other news of note:
Supreme Court declines to block Mississippi’s age-check law for social media, for now.
DOJ fires, charges staffer who admits throwing sandwich at federal agent deployed in D.C.
Man accused of faking his own death and fleeing the US convicted of rape.
Thieves have stolen millions in recent armored car heists in Philadelphia area.