OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
(arstechnica.com)
Yeah, the problem with LLMs is that they're far too easy to anthropomorphize. It's just a word predictor; there is no "thinking" going on. It doesn't "feel" or "lie", it doesn't "care" or "love". It was just trained on text that had examples of conversations where characters did express those feelings, but it's not going to statistically work out how those feelings function or when they are appropriate. All the math tells it is "when the input looks like this, output something like this," with NO consideration of the external factors that made those responses common in the training data.
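To make that concrete, here's a toy sketch of the same idea at bigram scale (the corpus and function names are made up, and a real LLM is a neural network over tokens rather than a count table, but the "statistical next-word prediction" point is the same): it only counts which word tends to follow which, then "generates" by sampling from those counts. Nothing in it could know what "sad" means; it just reproduces the statistics of the text it saw.

```python
import random
from collections import defaultdict, Counter

# Tiny made-up corpus standing in for "training data".
corpus = (
    "i love you . i love pizza . "
    "i feel sad . i feel happy . "
    "you love me ."
).split()

# Count which word follows which (a bigram table): pure statistics,
# no notion of what "love" or "sad" actually means.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# "Generate" text by repeatedly predicting the next word.
word = "i"
out = [word]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "i feel sad . i love you"
```

Scale that up by billions of parameters and you get fluent, convincing text, but the underlying operation is still "output whatever statistically tends to follow input like this," not understanding or caring.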