this post was submitted on 18 Jun 2024
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
I’ll see people responding to fucken Lemmy comments with “I ran the question through GPT and...” like what the fuck?
It’s literally the same thing as saying “I asked some RANDOM dude and this is what he said. Also I have no reason to believe he’s even the slightest bit educated.”
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
This is coming from someone who hates google, but if this person’s entire family had died, I would put a LOT of that blame on them before google.
That would really put the "uh oh" in your spaghettios
Someone sell this commercial.
Spaghetti-O's! Pick up a can and feed your family, because AI might have told you to make botulism.
I have found I get a 0.000000000006% lower hallucination rate by throwing alphabet soup at the wall instead of spaghetti; my preprint is on arXiv.
I applaud your optimism that most people can do this without AI, but have you gone and met people? Most people are not capable of producing torrents of shameless bullshit, as conscience or awareness of social and/or professional costs rears its head at some point.
If they can’t do it themselves, then they have no idea whether the output is any good. If they want to run it through the bullshit machine, they shouldn’t post the output unless they know it’s accurate.
And once they realise it, lives will be saved.
Can they, though? Sure, in theory Google could hire millions of people to write overviews that are equally idiotic, but obviously that is not something they would actually do.
I think there's an underlying ethical theory at play here, which goes something like: it is fine to fill internet with half-plagiarized nonsense, as long as nobody dies, or at least, as long as Google can't be culpable.
The millions of people writing overviews would definitely be more reliable, that's for sure. For one thing, they understand the concept of facts.