OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
(arstechnica.com)
Systematic reviews bear out the ineffectiveness of crisis hotlines, so the reason they're popularly touted in media isn't for effectiveness. It's so people can feel "virtuous" & "caring" with their superficial gestures, then think no further of it. Plenty of people who've attempted suicide scorn the heightened "awareness" & "sensitivity" of recent years as hollow virtue signaling.
Despite the expertly honed superficiality on here, ChatGPT is not about to dissuade anyone from their plans to commit suicide. It's not human, and if it tried, it'd probably just irritate people, who'd turn to more old-fashioned web searches & research instead. People are entitled to look up information: we live in a free society.
If someone really wants to kill themselves, I think that's ultimately their choice, and we should respect it & be grateful.
You're staying at an involuntary hotel with room & board, medication, & 24-hour professional monitoring: of course it's going to cost. It's absolutely not worth it unless it's a true emergency. Once the emergency passes, they try to release you to outpatient services.
The psychiatric professionals I've met take their jobs quite seriously & aren't trying to cheat anyone. Electroconvulsive therapy is a last resort for patients who don't respond to medication or any other treatment.
I used to be suicidal. I am grateful I never succeeded. You are a monster if you think we should just let people kill themselves.