OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
(arstechnica.com)
ChatGPT was not designed to provide guidance to suicidal people. The real problem is an exploitative and cruel mental health industry that can lock up suicidal people in horrific locked facilities at huge profits while inflicting additional trauma. There is a reason many people will never call 988 or open up to a mental health clinician about suicidal feelings, given how horrible and exploitative locked facilities are. This is not ChatGPT's fault; it's the fault of a greedy mental health industry trying to look good by locking up the suicidal instead of engaging with them, while inflicting traumatic harm on patients.
It certainly should be designed to handle that type of query, though. At the very least, it should avoid discussing the topic.
Wouldn't ChatGPT be liable if someone planned a terror attack with it?
The court filing lays out how OpenAI developed its latest model to prioritize engagement. In this case, OpenAI had a system that was consistently flagging his conversations as high risk for harm, but there were no safeguards to actually end the conversation the way there are when it's asked to generate copyrighted material.
The complaint is ultimately saying that OpenAI should have implemented safeguards to stop the conversation once the system determined it was high risk, rather than letting the large language model keep responding.
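For illustration, here is a minimal sketch of the kind of safeguard that comment describes: a conversation-level gate that checks a risk score before each model reply and hard-stops the conversation once flagged turns cross a threshold. Everything here is an assumption for the example (the `score_risk` stub, the threshold values, the function names); it is not OpenAI's actual system.

```python
# Hypothetical sketch of a conversation-level safety gate, not OpenAI's actual system.
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.85   # assumed cutoff for treating a turn as high risk
MAX_FLAGGED_TURNS = 3   # assumed tolerance before the conversation is ended

@dataclass
class Conversation:
    messages: list = field(default_factory=list)
    flagged_turns: int = 0
    ended: bool = False

def score_risk(message: str) -> float:
    """Stand-in for a self-harm risk classifier; a real system would use a trained model."""
    return 1.0 if "suicide" in message.lower() else 0.0

def generate_reply(messages: list) -> str:
    """Stand-in for the language model call."""
    return "model reply"

CLOSED_NOTICE = "This conversation has been closed. Please contact a crisis line such as 988."

def handle_user_message(convo: Conversation, user_message: str) -> str:
    """Gate the model behind a risk check and stop the conversation when it stays high risk."""
    if convo.ended:
        return CLOSED_NOTICE

    if score_risk(user_message) >= RISK_THRESHOLD:
        convo.flagged_turns += 1

    # The complaint's point: flagging alone is not enough; the conversation must end,
    # the same way generation stops for copyrighted-material requests.
    if convo.flagged_turns >= MAX_FLAGGED_TURNS:
        convo.ended = True
        return CLOSED_NOTICE

    convo.messages.append(user_message)
    reply = generate_reply(convo.messages)
    convo.messages.append(reply)
    return reply
```

The design point is simply that the gate wraps the model call, so once the risk condition is met no further model output is produced, regardless of what the model itself would have said.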