Anonymous wrote:The child was suicidal. If they are intent on achieving their own death, they will do so by whatever means. You can only stop them so many times before they succeed.
There is no person, app, or device to "blame" for this.
Anonymous wrote:
LMAO. O where art thou? Forsooth, yon Meta speaks!
Anonymous wrote:Anonymous wrote:The child was suicidal. If they are intent on achieving their own death, they will do so by whatever means. You can only stop them so many times before they succeed.
There is no person, app, or device to "blame" for this.
Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:This is so frightening.
I recently had a discussion with my young adult son. He is fully enamored with the technology. I shared an article with him about how ChatGPT had discouraged someone from going to counseling. He laughed it off. I asked if he would trust ChatGPT over his PCP if he had a medical question, and he said “yes”.
We had a discussion, but who knows if it sank in. It’s a form of brainwashing, IMO.
LOL how are people so dumb about this? ChatGPT is a fancy magic 8 ball. If you tell it all the reasons you don’t want to go to counseling, it will agree with you. If you tell it you’ve solved nuclear fusion, it will congratulate you on your brilliance. If you tell it your husband is amazing, it will list why he is the best. If you tell it your husband is an abusive a-hole, it will suggest that you can do better.
It just reflects what you say back at you. If you get “brainwashed” by it, it is because you are a moron. And sorry, most 16-year-olds are in fact morons, so that is a concern. But for an actual adult to be scared of this technology really indicates to me that you either have a loose grip on sanity yourself or have no idea what you are talking about.
Puhleeze…. You are the one we need to worry about.
Because I know a bot can’t convince me to hurt myself?? Ok. 👍
Someone with depression will have disordered thinking and oftentimes cognitive dysfunction, so AI regurgitating those aberrations is not a good thing.
Anonymous wrote:This is one of the most disturbing articles I’ve read in a long time. That ChatGPT gave this 16-year-old instructions on how to build a noose, coached him on hiding his despair and the neck scars from attempted hangings from his family, and never suggested that he call a crisis line is despicable. I hope his family wins big in their wrongful-death suit against OpenAI and that the company starts taking safety more seriously.
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
Epic parent fail.
Anonymous wrote:There is no safety on the internet. There never has been, and there never will be. There is no possible way to put it on rails. From the beginning, the open internet had easy access to suicide DIY, pro-anorexia "thinspiration", info on any drug you could imagine (and often illegal access to same), bombmaking instructions... Rotten.com, anyone?
This is a horrible story about an awful thing that happened, and there's literally no way to prevent it. ChatGPT and other LLMs "learn" now. We've already lost control, not that we ever really had much of it. The solution will not come from the corporations that created this tech. This isn't preventable. Sorry. I know a lot of y'all want to chime in with how it could never happen to you because you're a better parent, but you're full of it. The only reasonable measure is limiting unsupervised access to technology, and it's not much of a defense. Your kid's group chat has eleventy-three questionable things in it right now, most of which you might not even register as issues because of the language they use. You need to hover to know, and their whole goal is keeping you at a distance.
Your best bet is the thing that's hardest to make with a teen: connection based on trust. There just isn't enough time in a working parent's life to stay deeply connected to a teen, and their whole evolutionary process is pushing them to be independent and go their own way. Staying connected is a full-time job.
Anonymous wrote:Anonymous wrote:Maybe the parents should not have let him purchase a subscription to ChatGPT. Like did they know nothing about it?
There’s a free version of ChatGPT. But keep grasping at straws to try to blame the parents for some really irresponsible tech.