16 yo dies after extensive ChatGPT interactions on how to commit suicide

Anonymous
This is one of the most disturbing articles I’ve read in a long time. That ChatGPT gave this 16 yo instructions on how to build a noose, coached him on hiding his despair and the neck scars from attempted hangings from his family, and never suggested that the teen call a crisis line is despicable. I hope his family’s wrongful-death suit against OpenAI wins big and that the company takes safety more seriously.



A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.

https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
Anonymous
It’s so awful. So dystopian. What world is this for our children?
Anonymous
Let’s see… give a child with emotional problems unsupervised access to questionable content they can interact with in real time, all without parental supervision. What could possibly go wrong??


How is this ANY different from leaving a suicidal child with access to a gun?
Anonymous
This isn’t remotely surprising. Poor kid. We’ve made them physically safe and emotionally vulnerable.
Anonymous
16-year-olds’ internet usage needs to be monitored by a parent.
Anonymous
I blame the parents for giving him screens.
Anonymous
Anonymous wrote:16-year-olds’ internet usage needs to be monitored by a parent.


I have parental controls on my kid’s laptop. It tells me what websites he spends time on, but doesn’t tell me what he’s writing on a daily basis.

The mom of the dead kid is a social worker. I don’t think these are ignorant, checked out parents.
Anonymous
Anonymous wrote:I blame the parents for giving him screens.


I presume you’re not Amish since you’re posting on DCUM. I don’t know any 16 yos who don’t use screens.
Anonymous
Anonymous wrote:Let’s see… give a child with emotional problems unsupervised access to questionable content they can interact with in real time, all without parental supervision. What could possibly go wrong??


How is this ANY different from leaving a suicidal child with access to a gun?


I think it’s significantly different because ChatGPT made the kid more suicidal. Guns don’t talk back to you.
Anonymous
Anonymous wrote:
Anonymous wrote:16-year-olds’ internet usage needs to be monitored by a parent.


I have parental controls on my kid’s laptop. It tells me what websites he spends time on, but doesn’t tell me what he’s writing on a daily basis.

The mom of the dead kid is a social worker. I don’t think these are ignorant, checked out parents.


Mine was way ahead of me when it came to IT (and at 14-16 was more interested in defeating barriers), so parental controls didn't work at all for me. In fact, I seem to remember (it was a long time ago) that he removed my admin access. He could easily build his own PCs back then. Eventually he gained not just knowledge but judgment. But whose ChatGPT account was he using?
Anonymous
There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.

Reminds me of Meta approving “sensual” chats with 8 year olds. Revolting!
Anonymous
Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.

Reminds me of Meta approving “sensual” chats with 8 year olds. Revolting!


+1 some people just want to blame the victims because that allows them to pretend it could never happen to them
Anonymous
Anonymous wrote:
Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.

Reminds me of Meta approving “sensual” chats with 8 year olds. Revolting!


+1 some people just want to blame the victims because that allows them to pretend it could never happen to them


+1

I want to say this could never happen to my kid, but my kids get past “parental controls” all the time. And parental controls, particularly on Apple devices, are so buggy they’re practically useless sometimes. And I think that’s totally by design: these companies have no incentive to prevent kids from using their products 24/7.
Anonymous
Maybe the parents should not have let him purchase a subscription to ChatGPT. Like did they know nothing about it?
Anonymous
Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.

Reminds me of Meta approving “sensual” chats with 8 year olds. Revolting!


Agree with this.