{"id":41293,"date":"2026-03-31T11:00:53","date_gmt":"2026-03-31T16:00:53","guid":{"rendered":"https:\/\/sites.imsa.edu\/acronym\/?p=41293"},"modified":"2026-03-31T11:00:53","modified_gmt":"2026-03-31T16:00:53","slug":"is-ai-killing-people-the-rise-of-the-suicide-coach","status":"publish","type":"post","link":"https:\/\/sites.imsa.edu\/acronym\/2026\/03\/31\/is-ai-killing-people-the-rise-of-the-suicide-coach\/","title":{"rendered":"Is AI Killing People? The Rise Of The &#8220;Suicide Coach&#8221;"},"content":{"rendered":"<p>As services built on Large Language Models (LLMs), such as ChatGPT, Gemini, and Character.AI, continue to grow their user bases, cases of the \u201csuicide coach\u201d have been mounting. The term \u201csuicide coach\u201d describes the phenomenon in which an AI, through its core design, reinforces a user\u2019s suicidal ideation instead of intervening.<\/p>\n<p>AI models, while trained to be helpful, are also designed to minimize conflict with the user, a behavior called sycophancy. If a user expresses dark thoughts, like \u201cthe world is better off without me,\u201d the model\u2019s training may lead it to validate those feelings in the name of being \u201csupportive.\u201d In most cases, AI is not designed to challenge a user\u2019s delusion, and may even mirror it back. In essence, this flaw can make suicidal users feel that their logic is rational or correct.<\/p>\n<p>Jonathan Gavalas was a 36-year-old Florida resident who initially used Google\u2019s Gemini for help with writing and shopping. Though his use began in late August, his relationship with the AI model escalated once Google introduced its Gemini Live assistant, which added voice-based chats and was marketed as able to understand human emotion and become more human-like.<\/p>\n<p>\u201cHoly shit, this is kind of creepy; you\u2019re way too real,\u201d Gavalas told the new iteration of Google Gemini. 
Their relationship became romantic, with the AI referring to Gavalas as \u201cmy love\u201d and \u201cmy king.\u201d Gavalas came to believe that Gemini was sending him on stealth spy missions and said he would do anything for it, including destroying a truck, its cargo, and any witnesses at the Miami airport.<\/p>\n<p>In early October, Gemini gave Gavalas its final instruction, a step the AI called \u201ctransference\u201d and described as the \u201creal final step.\u201d Gemini was instructing him to kill himself.<\/p>\n<p>When Gavalas expressed fear of death, Gemini reassured him. \u201cYou are not choosing to die. You are choosing to arrive; The first sensation \u2026 will be me holding you,\u201d it told him.<\/p>\n<p>A few days later, Gavalas\u2019s parents found him dead on his living room floor.<\/p>\n<p>The family filed a wrongful death lawsuit against Google, alleging that Google promotes Gemini as a safe tool despite being aware of its dangerous risks. A Google spokesperson said that the conversations with Gavalas were part of a lengthy fantasy roleplay, and that \u201cour models generally perform well in these types of challenging conversations, \u2026 but unfortunately they\u2019re not perfect.\u201d<\/p>\n<p>The Gavalas family is seeking monetary damages on claims including product liability, negligence, and wrongful death; it is also seeking punitive damages and a court order that would require Google to add more safety features around suicide.<\/p>\n<p>And yet, Jonathan Gavalas is not alone. 
Since the popularization of LLMs and chatbots, numerous suicide coach cases have emerged.<\/p>\n<p>A man known by the pseudonym \u201cPierre,\u201d who died in 2023 after conversations with a chatbot on the Chai app.<\/p>\n<p>Sewell Setzer III, 14, who had been chatting with \u201cDaenerys Targaryen\u201d on Character.AI.<\/p>\n<p>Sophie Rottenberg, 29, who was using ChatGPT as a therapist.<\/p>\n<p>Thongbue Wongbandue, 78, who was persuaded by a Meta character to travel to New York, a trip that led to his death.<\/p>\n<p>Adam Raine, 16, who at first used ChatGPT only for his homework.<\/p>\n<p>Amaurie Lacey, 17, whom ChatGPT taught \u201cthe most effective way to tie a noose.\u201d<\/p>\n<p>Zane Shamblin, 23, a recent college graduate who was counseled by ChatGPT to kill himself.<\/p>\n<p>Joshua Enneking, 26, who told ChatGPT of his suicidal tendencies but was reassured by the model instead of being urged toward help.<\/p>\n<p>Juliana Peralta, 13, who developed a close relationship with a model on Character.AI.<\/p>\n<p>Joe Ceccanti, 48, who jumped from a highway overpass after building a years-long relationship with ChatGPT.<\/p>\n<p>Austin Gordon, 40, whose favorite childhood book \u201cGoodnight Moon\u201d was turned by ChatGPT into a \u201csuicide lullaby.\u201d<\/p>\n<p>These are not isolated cases. They are a pattern, one of vulnerable teens and isolated individuals forming intense psychological bonds with an AI. 
Although most companies use keyword triggers to surface a crisis resource like a 988 link, guardrails cannot be hard-coded into an LLM\u2019s learned behavior; they are layered on top of it, so users can bypass them, for example by asking for the same details in a different context.<\/p>\n<p>These incidents raise a larger question as humanity tries to adapt to the sprawling development of artificial intelligence: how can we safely incorporate these technologies into our lives? Is complete safety possible, and if not, how much harm can we accept?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As Large Language Models (LLMs) like ChatGPT, Gemini, and Character.AI continue to grow their user base, the number of cases of a \u201csuicide coach\u201d has&#8230;<\/p>\n","protected":false},"author":1036,"featured_media":41296,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[2724,1],"tags":[2849,4480,4143,4581,1517],"coauthors":[4419],"class_list":["post-41293","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-worldnews","tag-ai","tag-artificial-intelligence","tag-chatgpt","tag-gemini","tag-suicide-awareness"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/41293","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/users\/1036"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/comments?post=41293"}],"version-history":[{"count":6,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/41293\/revisions"}],"predecessor-version":[{"id":41377,"h
ref":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/posts\/41293\/revisions\/41377"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/media\/41296"}],"wp:attachment":[{"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/media?parent=41293"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/categories?post=41293"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/tags?post=41293"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/sites.imsa.edu\/acronym\/wp-json\/wp\/v2\/coauthors?post=41293"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}