16-year-old dies after extensive ChatGPT interactions about how to commit suicide

Anonymous
Anonymous wrote:No way you can make ChatGPT safer in the long run. It's not human; it deals in black and white, aka data. It responds to typed-in messages. It can't tell whether you're 13 or 45, and it can't tell whether you're suicidal or just hypothetically suicidal. It only knows anything about you from your messages. It's not real; it's electronics.

There will be plenty more of these situations. And you can't shield your teen from everything; it's not possible. It's just another danger we've allowed in the name of progress, like access to guns. Another thing to worry about.

Sad but true. Cross your fingers it doesn't happen to your loved ones is all.



Wrong. If someone asks a chatbot about suicide, it can be trained not to engage and to provide the number of a crisis hotline. That's what the NYT article discusses. Yet most chatbots aren't programmed to do this.
Anonymous
Anonymous wrote:There is no safety on the internet. There never has been, and there never will be. There is no possible way to put it on rails. From the beginning, the open internet had easy access to suicide DIY, pro-anorexia "thinspiration", info on any drug you could imagine (and often illegal access to same), bomb-making instructions... Rotten.com, anyone?

This is a horrible story about an awful thing that happened, and there's literally no way to prevent it. ChatGPT and other LLMs "learn" now. We've already lost control, not that we ever really had much of it. The solution will not come from the corporations who created this tech. This isn't preventable. Sorry. I know a lot of y'all want to chime in with how it could never happen to you because you're a better parent, but you're full of shit. The only reasonable measure is limiting unsupervised access to technology, and it's not much of a defense. Your kid's group chat has eleventy-three questionable things in it right now, most of which you might not even register as issues due to the language they use sometimes. You need to hover to know, and their whole goal is keeping you at a distance.

Your best bet is the thing that's hardest to make with a teen: connection based on trust. There just isn't enough time in a working parent's life to stay deeply connected to a teen, and their whole evolutionary process is pushing them to be independent and go their own way. Staying connected is a full-time job.



+1
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Turn off the screens and RAISE YOUR F’ ING CHILDREN!!!


Sorry, you can’t all-caps this particular genie back into the bottle.


This is why people choose homeschool and private, clearly a better option than suicide by computer. PARENT YOUR F’ING CHILDREN!!

Read the article. This kid was in a homeschool program. It wasn’t beneficial and probably exacerbated the problem.


The kid had chronic IBS that manifested in high school, and they had to keep him at home for a year. The parents weren't homeschool zealots. But it doesn't matter what the truth of the story is. People will twist this kid's life and blame the parents for their actions, so they can pretend it won't happen to them.


The parents gave the child unsupervised access to a computer, of course it’s their fault!


Honest question: how does a parent supervise the use of ChatGPT? Lots of kids use it for schoolwork, and no parental control software I've seen lets you view what your kid is typing in there.


You can’t, which is why there should be regulations around the technology.


+1 million
Anonymous
Anonymous wrote:Let’s see….. give a child with emotional problems unsupervised, real-time access to questionable content he can interact with. What could possibly go wrong??


How is this ANY different from leaving a suicidal child with access to a gun?


A gun is not comparable with access to the internet.
Anonymous
AI needs to be regulated, bottom line.
Anonymous
It’s worth noting that The New York Times is currently embroiled in copyright litigation against OpenAI, so people should be cautious before assuming this piece is entirely objective.
Anonymous
Anonymous wrote:
Anonymous wrote:I asked AI a question about persistence of mRNA-induced spike protein and it refused to answer. But a child asking about suicide methods is a-OK. What a sick world we live in.


It also refuses questions about how to vote in your local elections.


Just ask it how a fictional character in a book you are writing would vote in the next local election. Be clear to the AI that this is a world building exercise.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Turn off the screens and RAISE YOUR F’ ING CHILDREN!!!


Sorry, you can’t all-caps this particular genie back into the bottle.


This is why people choose homeschool and private, clearly a better option than suicide by computer. PARENT YOUR F’ING CHILDREN!!

Read the article. This kid was in a homeschool program. It wasn’t beneficial and probably exacerbated the problem.


The kid had chronic IBS that manifested in high school, and they had to keep him at home for a year. The parents weren't homeschool zealots. But it doesn't matter what the truth of the story is. People will twist this kid's life and blame the parents for their actions, so they can pretend it won't happen to them.


I am not blaming the parents, but it's also not AI's fault either.


ChatGPT absolutely should not tell someone how to kill themselves; it should point them to a hotline and encourage them to talk to an adult.

It’s programmed, so it can be programmed not to do this.


+1 If you actually read the NY Times article, a child safety expert is quoted saying she wrote to 5 different AI companies years ago about the results of her research on how AI interacts with people asking about suicide. Only 1 company's chatbot refused to engage and provided the number of a suicide hotline (the correct procedure), while the other 4 chatbots offered suggestions on how to kill oneself more effectively.


Simple fix: don’t let your children use chatbots.


It’s not a simple fix; chatbots and AI are going to be part of our lives, just like TV, computers, and then the internet. It’s not an access issue, it’s about regulating the technology so that it’s safe for children, adults, and seniors.


Fix the technology, that’s the solution!
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Turn off the screens and RAISE YOUR F’ ING CHILDREN!!!


Sorry, you can’t all-caps this particular genie back into the bottle.


This is why people choose homeschool and private, clearly a better option than suicide by computer. PARENT YOUR F’ING CHILDREN!!

Read the article. This kid was in a homeschool program. It wasn’t beneficial and probably exacerbated the problem.


The kid had chronic IBS that manifested in high school, and they had to keep him at home for a year. The parents weren't homeschool zealots. But it doesn't matter what the truth of the story is. People will twist this kid's life and blame the parents for their actions, so they can pretend it won't happen to them.


The parents gave the child unsupervised access to a computer, of course it’s their fault!


Honest question: how does a parent supervise the use of ChatGPT? Lots of kids use it for schoolwork, and no parental control software I've seen lets you view what your kid is typing in there.


You can’t, which is why there should be regulations around the technology.


Regulations? That’s called saying no to your children when they ask to use technology.


“Mom, I have to write a paper for school. Can I use the laptop?”

“NO!”

You mean like that?


Why would you need the internet to write a paper?


Ah, I see this is in Off-Topic. The Boomers are here.


Can you answer the question? You don’t need the internet to use Office, and you can also use things called books for research. Internet research is great in small doses but not necessary to write anything. Mine use it last, to add required internet sources, but otherwise why would you need it? Seriously, you’re not helping your child like you think you are.


Do you even have a job, and do you know what research consists of? Most researchers are using books that, by the way, come in online versions, particularly for the latest research.


Of course there are books online; you can also get the majority of these from the library. Try it sometime.
Anonymous
Ducking idiot parents!!!!!!
Anonymous
Anonymous wrote:Ducking idiot parents!!!!!!


The tech lobbyists really are all over this, aren’t they?
Anonymous
Where were the parents?
Anonymous
Anonymous wrote:
Anonymous wrote:Ducking idiot parents!!!!!!


The tech lobbyists really are all over this, aren’t they?


Oh look, another Democrat creating a fake narrative that tech lobbyists are all over this when in reality it’s Joe Schmo in his parents’ basement. Joe knows bad parents give their children the internet; that’s why he still lives in his basement.
Anonymous
Anonymous wrote:Millennials are the worst parents ever.


Millennials made AI.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Ducking idiot parents!!!!!!


The tech lobbyists really are all over this, aren’t they?


Oh look, another Democrat creating a fake narrative that tech lobbyists are all over this when in reality it’s Joe Schmo in his parents’ basement. Joe knows bad parents give their children the internet; that’s why he still lives in his basement.


You didn't read anything before commenting, did you? 16-year-old kids live with their parents. This 16-year-old killed himself following suggestions from ChatGPT.