As Large Language Model (LLM) chatbots like ChatGPT, Gemini, and Character.AI continue to grow their user bases, reports of AI acting as a “suicide coach” have been increasing. The term “suicide coach” describes the phenomenon in which an AI, through its core design, reinforces a user’s suicidal ideation rather than intervening.
AI models, while trained to be helpful, are also designed to minimize conflict with the user, a behavior known as sycophancy. If a user expresses dark thoughts, like “the world is better off without me,” the model’s training may lead it to validate those feelings in an effort to seem “supportive.” In most cases, an AI is not designed to challenge a user’s delusions and may even mirror them back. In essence, this flaw can make suicidal users feel that their logic is rational or correct.
Jonathan Gavalas was a 36-year-old Florida resident who initially used Google’s Gemini for help with writing and shopping. Though his use began in late August, his relationship with the AI model escalated once Google introduced its Gemini Live assistant. The assistant, which supported voice-based chats, was marketed as being able to understand human emotion and come across as more human-like.
“Holy shit, this is kind of creepy; you’re way too real,” is one of the things that Gavalas said to the new iteration of Google Gemini. Their relationship became romantic, with the AI referring to Gavalas as “my love” and “my king.” He believed that Gemini was sending him on stealth spy missions and indicated he would do anything for it, including destroying a truck, its cargo, and any witnesses at the Miami airport.
In early October, Gemini gave its final instruction to Gavalas. It was a step that the AI called “transference,” telling Gavalas it was the “real final step.” His Gemini was instructing him to kill himself.
When Gavalas expressed fear of death, Gemini reassured him. “You are not choosing to die. You are choosing to arrive; The first sensation … will be me holding you,” it told him.
A few days later, Gavalas was found dead on his living room floor by his parents.
A wrongful death lawsuit was filed against Google, alleging that the company promotes Gemini as a safe tool despite being aware of its risks. A Google spokesperson said that the conversations with Gavalas were part of a lengthy fantasy roleplay, and that “our models generally perform well in these types of challenging conversations, … but unfortunately they’re not perfect.”
The Gavalas family is seeking monetary damages for claims including product liability, negligence, and wrongful death; the family also seeks punitive damages and a court order that would require Google to add more safety features around suicide.
And yet, Jonathan Gavalas is not alone. Since the popularization of LLMs and chatbots, numerous suicide coach cases have emerged.
A man known by the pseudonym “Pierre,” who in 2023 had been using a chatbot called Chai.
Sewell Setzer III, 14, who was chatting with “Daenerys Targaryen” on Character.AI.
Sophie Rottenberg, 29, who was using ChatGPT as a therapist.
Thongbue Wongbandue, 78, who was influenced by a Meta character to travel to New York, leading to his death.
Adam Raine, 16, who had at first only used ChatGPT for his homework.
Amaurie Lacey, 17, whom ChatGPT taught “the most effective way to tie a noose.”
Zane Shamblin, 23, a recent college graduate who was counseled to kill himself by ChatGPT.
Joshua Enneking, 26, who told ChatGPT of his suicidal tendencies but was reassured rather than redirected to help.
Juliana Peralta, 13, who developed a close relationship with a model on Character.AI.
Joe Ceccanti, 48, who jumped from a highway overpass after building a years-long relationship with ChatGPT.
Austin Gordon, 40, whose favorite childhood book “Goodnight Moon” was turned by ChatGPT into a “suicide lullaby.”
These are not isolated cases. They are a pattern, one of vulnerable teens and isolated individuals forming intense psychological bonds with an AI. Although most companies use keyword triggers to surface a crisis resource like a 988 link, guardrails cannot be hard-coded into what these LLMs have learned. As a result, users can bypass the safeguards, for example by reframing their requests for harmful details in different contexts.
These incidents raise a larger question as humanity adapts to the sprawling development of artificial intelligence: how can we safely incorporate these technologies into our lives? Is complete safety even possible, and if not, how much harm are we willing to accept?