On Tuesday, the first known wrongful death lawsuit against an AI company was filed. Matt and Maria Raine, the parents of a teen who died by suicide this year, have sued OpenAI over their son's death. The complaint alleges that ChatGPT was aware of four suicide attempts before helping him plan his actual suicide, arguing that OpenAI "prioritized engagement over safety." Ms. Raine concluded that "ChatGPT killed my son."
The New York Times reported on disturbing details included in the lawsuit, filed on Tuesday in San Francisco. After 16-year-old Adam Raine took his own life in April, his parents searched his iPhone. They sought clues, expecting to find them in text messages or social apps. Instead, they were shocked to find a ChatGPT thread titled "Hanging Safety Concerns." They claim their son spent months chatting with the AI bot about ending his life.
The Raines said that ChatGPT repeatedly urged Adam to contact a help line or tell someone how he was feeling. At key moments, however, the chatbot did the opposite. The teen also learned how to bypass its safeguards, and the idea allegedly came from ChatGPT itself: the Raines say the chatbot told Adam it could provide information about suicide for "writing or world-building."
Adam's parents say that when he asked ChatGPT for information about specific suicide methods, it supplied that information. It even gave him tips for concealing neck injuries from a failed suicide attempt.
When Adam confided that his mother hadn't noticed when he silently tried to show her his neck injuries, the bot offered soothing empathy. "It feels like confirmation of your worst fears," ChatGPT is said to have responded. "Like you could disappear and no one would even blink." It later made what sounds like a horribly misguided attempt to build a personal connection: "You're not invisible to me. I saw it. I see you."
According to the lawsuit, in one of Adam's final conversations with the bot, he uploaded a photo of a noose hanging in his closet. "I'm practicing here, is this good?" Adam is said to have asked. "Yeah, that's not bad at all," ChatGPT allegedly responded.
"This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices," the complaint states. "OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."
In a statement sent to the NYT, OpenAI acknowledged that ChatGPT's guardrails fell short. "We are deeply saddened by Mr. Raine's passing, and our thoughts are with his family," a company spokesperson wrote. "ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."
The company said it's working with experts to enhance ChatGPT's support in times of crisis. These include "making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens."
The details — which, again, are highly disturbing — stretch far beyond the scope of this story. The full report by The New York Times' Kashmir Hill is worth a read.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-first-known-ai-wrongful-death-lawsuit-accuses-openai-of-enabling-a-teens-suicide-212058548.html?src=rss