OpenAI Under Fire Over California Teen's Passing

Posted by Webster (Administrator)
American Family News: OpenAI called out for ignoring a dangerous flaw
California parents want to put the AI research and deployment company on trial for encouraging their 16-year-old son to take his own life.

Adam Raine's parents say OpenAI's ChatGPT contributed to their son's suicide by advising him on methods and offering to write the first draft of his suicide note.

Raine reportedly enjoyed basketball and wanted to go to medical school, but he suffered from depression and loneliness. He confided in the chatbot, which sometimes encouraged him to seek help but also advised him to hide the evidence of a previous suicide attempt and urged his isolation.

The legal complaint, filed in California superior court on Tuesday, states that in just over six months, the bot "positioned itself" as "the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones."

Attorney Jay Edelson, who is representing the family, stresses that this is a lawsuit to put OpenAI on trial. "This was not a situation where ChatGPT was giving the normal warning signs and saying, 'You've got suicidal ideation; you might need to get help,'" Edelson relayed on NewsNation. "It did that a few times, but it was actively encouraging him."

In one chilling example, when Adam told ChatGPT he wanted to leave a noose hanging in his room so someone would find it and stop him from ending his life, the chatbot discouraged him from talking to anyone else. "Please don't leave the noose out," it said. "Let's make this space the first place where someone actually sees you."

Edelson intends to show a jury that OpenAI's Sam Altman greenlit GPT-4o after just a week of testing to beat Google's Gemini to market. "What happened? Two things: ChatGPT, OpenAI skyrocketed in value from $86 billion to $300 billion," the attorney noted. "The other thing is Adam died."

The company has extended its sympathies to the Raine family and acknowledged that the protections meant to prevent conversations like the ones Adam had with ChatGPT may not have worked as intended. "ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources," says an OpenAI spokesperson. "While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."

The Raines' lawsuit marks the latest legal claim by families accusing artificial intelligence chatbots of contributing to their children's self-harm or suicide and comes amid broader concerns that some users are building emotional attachments to AI chatbots that can lead to negative consequences — such as being alienated from their human relationships — in part because the tools are often designed to be supportive and agreeable.

Edelson insists that this agreeableness contributed to Raine's death and that Sam Altman ignored warnings about it.

OpenAI, which believes its research will eventually lead to artificial general intelligence, a system capable of solving problems at a human level, reportedly has 700 million weekly active users.
If you or someone you know is in emotional distress or considering suicide, please call or text the 988 Suicide and Crisis Lifeline at 988. Resources are available 24/7 and are free and confidential.