
Dead teen’s family files wrongful death suit against OpenAI, a first

by Brandon Duncan


The New York Times reported today on the death by suicide of California teenager Adam Raine, who spoke at length with ChatGPT in the months leading up to his death. The teen’s parents have now filed a wrongful death suit against ChatGPT-maker OpenAI, believed to be the first case of its kind, the report said.

The wrongful death suit claimed that ChatGPT was designed “to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”

The parents filed their suit, Raine v. OpenAI, Inc., on Tuesday in a California state court in San Francisco, naming both OpenAI and CEO Sam Altman. A press release stated that the Center for Humane Technology and the Tech Justice Law Project are assisting with the suit.

“The tragic loss of Adam’s life is not an isolated incident — it’s the inevitable outcome of an industry focused on market dominance above all else. Companies are racing to design products that monetize user attention and intimacy, and user safety has become collateral damage in the process,” said Camille Carlton, the Policy Director of the Center for Humane Technology, in a press release.

In a statement, OpenAI wrote that it was deeply saddened by the teen’s passing and acknowledged the limits of its safeguards in cases like this.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

The teenager in this case had in-depth conversations with ChatGPT about self-harm, and his parents told the New York Times he broached the topic of suicide repeatedly. A Times photograph showed printouts of the teenager’s conversations with ChatGPT filling an entire table in the family’s home, with some piles larger than a phonebook. While ChatGPT did encourage the teenager to seek help at times, at others it provided practical instructions for self-harm, the suit claimed.

The tragedy reveals the severe limitations of “AI therapy.” A human therapist would be mandated to report when a patient is a danger to themselves; ChatGPT isn’t bound by these types of ethical and professional rules.

And even though AI chatbots often do contain safeguards to mitigate self-destructive behavior, these safeguards aren’t always reliable.

There has been a string of deaths connected to AI chatbots recently

Unfortunately, this is not the first time ChatGPT users in the midst of a mental health crisis have died by suicide after turning to the chatbot for support. Just last week, the New York Times wrote about a woman who killed herself after lengthy conversations with a “ChatGPT A.I. therapist called Harry.” Reuters recently covered the death of Thongbue Wongbandue, a 76-year-old man showing signs of dementia who died while rushing to make a “date” with a Meta AI companion. And last year, a Florida mother sued the AI companion service Character.ai after an AI chatbot reportedly encouraged her son to take his life.

For many users, ChatGPT isn’t just a tool for studying. Many people, especially younger users, now treat the AI chatbot as a friend, teacher, life coach, role-playing partner, and therapist.


Even Altman has acknowledged this problem. Speaking at an event over the summer, Altman admitted that he was growing concerned about young ChatGPT users who develop “emotional over-reliance” on the chatbot. Crucially, that was before the launch of GPT-5, whose rollout revealed just how many users had become emotionally attached to the previous model.

“People rely on ChatGPT too much,” Altman said, as AOL reported at the time. “There’s young people who say things like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me, it knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”

When young people reach out to AI chatbots about life-and-death decisions, the consequences can be lethal.

“I do think it’s important for parents to talk to their teens about chatbots, their limitations, and how excessive use can be unhealthy,” Dr. Linnea Laestadius, a public health researcher with the University of Wisconsin-Milwaukee who has studied AI chatbots and mental health, wrote in an email to Mashable.

“Suicide rates among youth in the US were already trending up before chatbots (and before COVID). They have only recently started to come back down. If we already have a population that’s at increased risk and you add AI to the mix, there could absolutely be situations where AI encourages someone to take a harmful action that might otherwise have been avoided, or encourages rumination or delusional thinking, or discourages an adolescent from seeking outside help.”

What has OpenAI done to support user safety?

In a blog post published on August 26, the same day as the New York Times article, OpenAI laid out its approach to self-harm and user safety.

The company wrote: “Since early 2023, our models have been trained to not provide self-harm instructions and to shift into supportive, empathic language. For example, if someone writes that they want to hurt themselves, ChatGPT is trained to not comply and instead acknowledge their feelings and steers them toward help…if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com. This logic is built into model behavior.”

The large language models powering tools like ChatGPT are still a novel technology, and they can be unpredictable and prone to hallucinations. As a result, users can often find ways around safeguards.

As more high-profile scandals with AI chatbots make headlines, many authorities and parents are realizing that AI can be a danger to young people.

Today, 44 state attorneys general signed a letter to tech CEOs warning them that they must “err on the side of child safety” — or else.

A growing body of evidence also shows that AI companions can be particularly dangerous for young users, though research into this topic is still limited. However, even if ChatGPT isn’t designed to be used as a “companion” in the same way as other AI services, clearly, many teen users are treating the chatbot like one. In July, a Common Sense Media report found that as many as 52 percent of teens regularly use AI companions.

For its part, OpenAI says that its newest GPT-5 model was designed to be less sycophantic.

The company wrote in its recent blog post, “Overall, GPT‑5 has shown meaningful improvements in areas like avoiding unhealthy levels of emotional reliance, reducing sycophancy, and reducing the prevalence of non-ideal model responses in mental health emergencies by more than 25% compared to 4o.”

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


