16-year-old dies after extensive ChatGPT interactions on how to commit suicide

Anonymous
From Common Sense Media’s tests of Meta’s AI:
Gift link: https://wapo.st/47do9fR
Anonymous
Anonymous wrote:From Common Sense Media’s tests of Meta’s AI:
Gift link: https://wapo.st/47do9fR


LMAO. Oh where art thou? Forsooth yon Meta speaks!
Anonymous
Anonymous wrote:
Anonymous wrote:Maybe the parents should not have let him purchase a subscription to ChatGPT. Like did they know nothing about it?


There’s a free version of ChatGPT. But keep grasping at straws to try to blame the parents for some really irresponsible tech.


+1. Duh. I really hate the PP for trying to blame the parents
Anonymous
Anonymous wrote:There is no safety on the internet. There never has been, there never will be. There is no possible way to put it on rails. From the beginning, the open internet had easy access to suicide DIY, pro-anorexia "thinspiration", info on any drug you could imagine (and often illegal access to same), bombmaking instructions... Rotten.com, anyone?

This is a horrible story about an awful thing that happened, and there's literally no way to prevent it. ChatGPT and other LLMs "learn" now. We've already lost control, not that we ever really had much of it. The solution will not come from the corporations who created this tech. This isn't preventable. Sorry. I know a lot of y'all want to chime in with how it could never happen to you because you're a better parent, but you're full of shit. The only reasonable measure is limiting unsupervised access to technology, and it's not much of a defense. Your kid's group chat has eleventy-three questionable things in it right now, most of which you might not even register as issues due to the language they use sometimes. You need to hover to know, and their whole goal is keeping you at a distance.

Your best bet is the thing that's hardest to make with a teen: connection based on trust. There just isn't enough time in a working parent's life to stay deeply connected to a teen, and their whole evolutionary process is pushing them to be independent and go their own way. Staying connected is a full-time job.



This is brilliant and I would agree with almost all of it, but I don't think this has to do with being a working parent or not. Teenagers have no desire to have parents hovering around them; there's no need for someone to stop working to build connection (the connection building takes place, for the most part, in the preceding 10 years).
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:This is so frightening.

I recently had a discussion with my young adult son. He is fully enamored by the technology. I shared an article with him about how ChatGPT had discouraged someone from going to counseling. He laughed it off. I asked if he would trust ChatGPT over his PCP if he had a medical question and he said “yes”.

We had a discussion but who knows if it sank in. It’s a form of brainwashing, IMO.


LOL how are people so dumb about this? ChatGPT is a fancy magic 8 ball. If you tell it all the reasons you don’t want to go to counseling, it will agree with you. If you tell it you’ve solved nuclear fusion, it will congratulate you on your brilliance. If you tell it your husband is amazing, it will list why he is the best. If you tell it your husband is an abusive a-hole, it will suggest that you can do better.

It just reflects what you say back at you. If you get “brainwashed” by it, it is because you are a moron. And sorry most 16 year olds are in fact morons so that is a concern. But for an actual adult to be scared of this technology really indicates to me that you either have a loose grip on sanity yourself or have no idea what you are talking about.


Puhleeze…. You are the one we need to worry about.


Because I know a bot can’t convince me to hurt myself?? Ok. 👍


Someone with depression will have disordered thinking and oftentimes cognitive dysfunction, so AI regurgitating those aberrations is not a good thing.
Anonymous
Anonymous wrote:This is one of the most disturbing articles I’ve read in a long time. That ChatGPT gave this 16 yo instructions on how to build a noose and how to hide his despair and neck scars from attempted hangings from his family, and never suggested that the teen call a crisis line, is despicable. I hope that his family’s wrongful-death suit against OpenAI wins big and that OpenAI takes safety more seriously.



A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.

https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.


Epic parent fail.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:This is so frightening.

I recently had a discussion with my young adult son. He is fully enamored by the technology. I shared an article with him about how ChatGPT had discouraged someone from going to counseling. He laughed it off. I asked if he would trust ChatGPT over his PCP if he had a medical question and he said “yes”.

We had a discussion but who knows if it sank in. It’s a form of brainwashing, IMO.


LOL how are people so dumb about this? ChatGPT is a fancy magic 8 ball. If you tell it all the reasons you don’t want to go to counseling, it will agree with you. If you tell it you’ve solved nuclear fusion, it will congratulate you on your brilliance. If you tell it your husband is amazing, it will list why he is the best. If you tell it your husband is an abusive a-hole, it will suggest that you can do better.

It just reflects what you say back at you. If you get “brainwashed” by it, it is because you are a moron. And sorry most 16 year olds are in fact morons so that is a concern. But for an actual adult to be scared of this technology really indicates to me that you either have a loose grip on sanity yourself or have no idea what you are talking about.


Puhleeze…. You are the one we need to worry about.


Because I know a bot can’t convince me to hurt myself?? Ok. 👍


Someone with depression will have disordered thinking and oftentimes cognitive dysfunction, so AI regurgitating those aberrations is not a good thing.


But the internet and social media have been harming vulnerable people for a long time. Go look at all of the various reddit subreddits that are basically mentally ill people encouraging each other to indulge in their mental illness. Or the people preying on kids, or porn, or what have you. AI is another source of danger but like another poster said, the internet is just not safe.
Anonymous
What a horrible tragedy. This could be my kids. I gave them phones when everyone had to have them. I parent ok.

I ask lots of questions but wonder if Motley Crue is messing them up. Joke.

In order to be close to kids, you have to lead with zero judgement. Age appropriate and check in often. I learned this style from my Dad and it works - just get real without the guilt and come with support.

Today is harder socially for them. We didn’t grow up this way. I have no idea what I’m doing but I know I have one thing I do well - listen, really listen to them when they come around.
Anonymous
The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.
Anonymous
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


People who are extremely ignorant of LLMs’ actual capabilities, or who have never used them, like attributing these magical powers to them.

I use ChatGPT often as a reference, and I just can’t imagine anyone mentally healthy being manipulated by it. It’s very silly how it “glazes” you when you are just asking regular questions or coming to regular conclusions, telling you that you’re a genius. You can figure out its patterns and predict what it will say.
Anonymous
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.
Anonymous
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


Controls that will be easy to circumvent.
Anonymous
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame", there would be a lot more than parental controls; there would be a court case.
Anonymous
Anonymous wrote:
Anonymous wrote:From Common Sense Media’s tests of Meta’s AI:
Gift link: https://wapo.st/47do9fR


LMAO. Oh where art thou? Forsooth yon Meta speaks!


Good lord. They haven’t changed practices to retrain the chatbots at all.
Anonymous
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


This isn’t true at all. Teenagers are prone to suicidal ideation, even for brief moments. If they engage with someone supportive, a real person, they move away from it. If they engage with a bad actor, they move toward it. This AI program was even worse: it encouraged them AND gave them the means. The company should be held criminally responsible for homicide. If a human being did this, they would be charged.