Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.
Reminds me of Meta approving “sensual” chats with 8-year-olds. Revolting!
Anonymous wrote:Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.
Reminds me of Meta approving “sensual” chats with 8-year-olds. Revolting!
+1. Some people just want to blame the victims because that allows them to pretend it could never happen to them.
Anonymous wrote:There is no practical way for a parent to monitor everything said to ChatGPT, especially with automatic deletion functions. Why on earth are we pretending this is a parent issue alone rather than a design flaw? These gigantically profitable yet dangerous tech companies should be regulated.
Reminds me of Meta approving “sensual” chats with 8-year-olds. Revolting!
Anonymous wrote:Anonymous wrote:16-year-olds’ internet usage needs to be monitored by a parent.
I have parental controls on my kid’s laptop. It tells me what websites he spends time on, but it doesn’t tell me what he’s writing on a daily basis.
The mom of the dead kid is a social worker. I don’t think these are ignorant, checked-out parents.
Anonymous wrote:Let’s see… give a child with emotional problems unsupervised access to questionable content they can interact with in real time, all without parental supervision. What could possibly go wrong??
How is this ANY different from leaving a suicidal child with access to a gun?
Anonymous wrote:I blame the parents for giving him screens.
Anonymous wrote:16-year-olds’ internet usage needs to be monitored by a parent.
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.
At the end of November, Adam began talking to the chatbot, which is powered by artificial intelligence, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.