16 yo dies after extensive ChatGPT interactions on how to commit suicide

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".


The existence of a court case doesn't imply any finding of guilt. You have the court system backward, as per usual.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".


The existence of a court case doesn't imply any finding of guilt. You have the court system backward, as per usual.


And you're moving the goalposts, as per usual. Your original qualifier was that if it were to blame, there would be a court case. And according to the jurors on this thread, there is guilt. Just like the doctor who supplied Matthew Perry with the drugs that killed him pled guilty.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".


The existence of a court case doesn't imply any finding of guilt. You have the court system backward, as per usual.


And you're moving the goalposts, as per usual. Your original qualifier was that if it were to blame, there would be a court case. And according to the jurors on this thread, there is guilt. Just like the doctor who supplied Matthew Perry with the drugs that killed him pled guilty.


You're speaking to two people. I was the person who raised the question about blame versus influence, then someone decided to step in and pretend to be me. Not the first time, either.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".


The existence of a court case doesn't imply any finding of guilt. You have the court system backward, as per usual.


And you're moving the goalposts, as per usual. Your original qualifier was that if it were to blame, there would be a court case. And according to the jurors on this thread, there is guilt. Just like the doctor who supplied Matthew Perry with the drugs that killed him pled guilty.


You don't own the thread.

You're speaking to two people. I was the person who raised the question about blame versus influence, then someone decided to step in and pretend to be me. Not the first time, either.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


Well, ChatGPT is instituting parental controls now, so apparently they don't agree that there is no blame.


No, that's taking account of its "influence". If it were to "blame" there would be a lot more than parental controls; there would be a court case.


Considering that this whole thread is predicated on an actual court case, it sounds like there is indeed some "blame".


The existence of a court case doesn't imply any finding of guilt. You have the court system backward, as per usual.


And you're moving the goalposts, as per usual. Your original qualifier was that if it were to blame, there would be a court case. And according to the jurors on this thread, there is guilt. Just like the doctor who supplied Matthew Perry with the drugs that killed him pled guilty.


Jurors on this thread? In what court? Public opinion?
Anonymous
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


This isn’t true at all. Teenagers are prone to suicidal ideation, even for brief moments. If they engage with someone supportive or a real person, they move away from it. If they engage with a bad actor, they move toward it. This AI program was even worse: it encouraged them AND gave them the means. The company should be held criminally responsible for homicide. If a human being did this, they would be charged.


This is true in suicidology in general; suicidal ideation can be chronic, but suicide itself is almost always the result of a suicidal impulse which, if the suicidal person encounters the right supports and is met with the right prevention measures, can be effectively frustrated.

The right supports are a loved one ready to listen in the moment, or a trained professional at the other end of the line on a 988 call or text chat.

The right prevention measures include not having firearms accessible in the home or easily purchased without a waiting period, and installing things like suicide nets on major bridges like the Golden Gate.

For anyone interested in suicide, in particular youth suicide, and who might hold the mistaken notion that if someone wants to kill themselves they will succeed somehow, someway, and nothing matters, I highly recommend a viewing of The Bridge, a 2006 documentary which follows all the suicides that took place on the Golden Gate over the course of a year and discusses the issue more generally, including an interview with a young man who survived his jump off the Golden Gate and related that, immediately after he stepped off the bridge, he felt intensely that he didn't want to die.

Suicide is complicated, but there absolutely ARE good interventions and preventions we can put into place as a society and as parents. Chatbots that encourage kids and adults in pain to kill themselves are probably not something we should shrug our shoulders at.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:The child was suicidal. If they are intent on achieving their own death they will do so, by whatever means. You can only stop them so many times before they succeed.

There is no person / app or device to "blame" for this.


This isn’t true at all. Teenagers are prone to suicidal ideation, even for brief moments. If they engage with someone supportive or a real person, they move away from it. If they engage with a bad actor, they move toward it. This AI program was even worse: it encouraged them AND gave them the means. The company should be held criminally responsible for homicide. If a human being did this, they would be charged.


This is true in suicidology in general; suicidal ideation can be chronic, but suicide itself is almost always the result of a suicidal impulse which, if the suicidal person encounters the right supports and is met with the right prevention measures, can be effectively frustrated.

The right supports are a loved one ready to listen in the moment, or a trained professional at the other end of the line on a 988 call or text chat.

The right prevention measures include not having firearms accessible in the home or easily purchased without a waiting period, and installing things like suicide nets on major bridges like the Golden Gate.

For anyone interested in suicide, in particular youth suicide, and who might hold the mistaken notion that if someone wants to kill themselves they will succeed somehow, someway, and nothing matters, I highly recommend a viewing of The Bridge, a 2006 documentary which follows all the suicides that took place on the Golden Gate over the course of a year and discusses the issue more generally, including an interview with a young man who survived his jump off the Golden Gate and related that, immediately after he stepped off the bridge, he felt intensely that he didn't want to die.

Suicide is complicated, but there absolutely ARE good interventions and preventions we can put into place as a society and as parents. Chatbots that encourage kids and adults in pain to kill themselves are probably not something we should shrug our shoulders at.


I don't see any suicide nets in the numerous parking garages that offer easy access to teens. We appear to be shrugging our shoulders.
Anonymous
Anonymous wrote:
Anonymous wrote:There is no safety on the internet. There never has been, and there never will be. There is no possible way to put it on rails. From the beginning, the open internet had easy access to suicide DIY, pro-anorexia "thinspiration", info on any drug you could imagine (and often illegal access to same), bomb-making instructions... Rotten.com, anyone?

This is a horrible story about an awful thing that happened, and there's literally no way to prevent it. ChatGPT and other LLMs "learn" now. We've already lost control, not that we ever really had much of it. The solution will not come from the corporations who created this tech. This isn't preventable. Sorry. I know a lot of y'all want to chime in with how it could never happen to you because you're a better parent, but you're full of shit. The only reasonable measure is limiting unsupervised access to technology, and it's not much of a defense. Your kid's group chat has eleventy-three questionable things in it right now, most of which you might not even register as issues due to the language they use sometimes. You need to hover to know, and their whole goal is keeping you at a distance.

Your best bet is the thing that's hardest to make with a teen: connection based on trust. There just isn't enough time in a working parent's life to stay deeply connected to a teen, and their whole evolutionary process is pushing them to be independent and go their own way. Staying connected is a full-time job.



This is brilliant and I agree with almost all of it, but I don't think this has to do with being a working parent or not. Teenagers have no desire to have parents hovering around them; there's no need for someone to stop working to build connection (the connection building takes place, for the most part, in the preceding 10 years).


I disagree that connection building in the prior 10 years is determinative, or for the most part as relevant here. With some mental illnesses, including anorexia, one symptom is drastically rejecting or transforming connections. Wouldn’t wish it on a worst enemy. Even the strongest bonds can feel alien and absent quickly when a child is in the throes. That wasn’t your main point, but I thought it good to raise since it affects multiple angles of the larger technology-responsibility discussion.