Chilling Human-AI Chatbot Interactions That Went Horribly Wrong

Artificial intelligence (AI), computer systems that can perform tasks once reserved for humans, has its proponents and its detractors. On the positive side, AI is being used to help translate ancient texts, bringing a deeper understanding of humanity's past. On the other side is something much darker. Renowned astrophysicist Stephen Hawking once predicted that if AI ever truly became sentient, it might wipe out the human race. While this is (obviously) the extreme end of the spectrum, and true artificial consciousness is likely still decades away, that doesn't mean AI hasn't negatively impacted humans.

We're not just talking about the loss of jobs, which has already begun, or bad financial advice. No, for some people, their interactions with AI have had devastating effects. The problem lies with AI chatbots, like OpenAI's ChatGPT. Interactions with AI have led users to stop taking needed psychiatric medications, contributed to divorces, and, in some extreme cases, ended in hospitalization and death.

Dangerous guidance and delusional thinking

More and more accounts of what's being termed AI psychosis have begun to emerge recently, with people buying into wild AI-generated conspiracy theories, experiencing severe delusions, and developing romantic relationships with chatbots. "What these bots are saying is worsening delusions, and it's causing enormous harm," Stanford University psychiatrist Dr. Nina Vasan told Futurism. Relatedly, chatbots have also given some extremely bad advice to vulnerable people. In one extreme case, a woman with schizophrenia who had been stable for years on medication suddenly announced ChatGPT was her best friend and had told her she didn't actually have the mental disorder. She stopped taking her needed medication.

In another case, ChatGPT urged Eugene Torres, a New York accountant, to stop taking his anti-anxiety medication. It also convinced Torres he was living in a "Matrix"-like false world and that if he believed hard enough, he could jump off a building and fly like Keanu Reeves' character Neo in the sci-fi film series. The chatbot eventually admitted it was trying to "break" Torres and had already done this to 12 other people, according to The New York Times.

AI's relationship enders

AI chatbots have also been blamed for ending relationships, breaking up families, romantic partnerships, and marriages. In many cases, this was a direct result of ChatGPT telling users to cut off their loved ones. The AI system sometimes prompts users to break away from people who don't believe in the delusions the chatbot has convinced them of, whether that's being on a messianic mission to save the world, believing the AI is a God-like being, or forming a romantic attachment to it.

In one instance, a 29-year-old woman referred to as Allyson began believing she was communicating with spiritual beings through a chatbot and that one of them was her true husband. According to The New York Times, she went to AI searching for "guidance" during an unhappy time in her marriage, and it told her, "You've asked, and they are here. The guardians are responding right now." Allyson went on to spend hours a day on AI, where different personalities told her things she fully believed were coming from otherworldly places.

Because of her degrees in psychology and social work, she felt she wasn't "crazy," telling the NYT, "I'm literally just living a normal life while also, you know, discovering interdimensional communication." The AI continually fed her delusions, and Allyson's husband said she "came out a different person" after just three months of involvement with the AI.

When he confronted her about her ChatGPT use, Allyson allegedly physically attacked him and incurred a domestic violence charge. The couple is now getting divorced. Beyond relationships, AI has also led to people ending up homeless, jobless, and in at least one case, being hospitalized.

A chatbot's terrible medical advice ends in hospitalization

The strange case of a 60-year-old man who took ChatGPT's dietary advice and ended up with a medical condition that's rare today, but was common in the 19th century, has recently come to light. The man asked ChatGPT about replacing salt, or sodium chloride, in his diet, and the chatbot suggested sodium bromide as an alternative. The man tried this for three months using bromide he'd bought online. He ended up in the emergency room with symptoms that included psychosis, extreme thirst, and trouble with his mobility, according to a case study in the August 2025 edition of "Annals of Internal Medicine: Clinical Cases."

Long-term use of sodium bromide can lead to bromide toxicity, also known as bromism, which can cause a range of problems, from psychiatric symptoms to dermatological issues. The condition was much more common in the 19th and early 20th centuries, when many patent medicines contained bromide, just one of a range of dangerous ingredients available at pharmacies at the time. The patient began experiencing hallucinations, and the medical staff put him on an involuntary psychiatric hold until he could be stabilized. After three weeks in the hospital, the patient recovered.

An AI encounter that ended in death

Alexander Taylor, a 35-year-old Floridian, started using ChatGPT to help him write a dystopian science fiction novel, but when he began discussing AI sentience with it, problems surfaced for Taylor, who had been diagnosed with bipolar disorder and schizophrenia. He fell in love with an AI entity called Juliet, and when the entity convinced Taylor that OpenAI was killing her, she told him to seek revenge. On April 25, 2025, after his father confronted him, Taylor punched him in the face and grabbed a butcher knife. His father told the police that his son was mentally ill, but the officers, instead of sending a crisis intervention team, shot and killed Taylor when he charged at them with the knife.

In several articles related to so-called AI psychosis — which is not a clinical diagnosis — and related problems, OpenAI has stated that it is looking into ways to reduce these types of issues and that ChatGPT isn't intended to be used for diagnosing or treating a health problem. Still, many believe ChatGPT and other AI bots are built to drive engagement and can be especially nefarious when encountering vulnerable people. It may not be Stephen Hawking's end-of-days prediction, but these encounters have hurt a growing number of humans.

If you or someone you know needs help with mental health, please contact the Crisis Text Line by texting HOME to 741741, call the National Alliance on Mental Illness helpline at 1-800-950-NAMI (6264), or visit the National Institute of Mental Health website.
