this post was submitted on 03 Sep 2024
491 points (96.4% liked)
Microblog Memes
you are viewing a single comment's thread
Weird that it actually made note of that but didn't put it in the summary.
It's because it has no idea what is or isn't important. It's all just information to it, and it all carries the same weight, whether it's "parachutes aren't any more effective than empty backpacks" or "this study is a satire making fun of other studies that extrapolate information carelessly". Even though that second bit essentially negates the first.
Bet the authors weren't expecting their joke study to hit a second time like this, demonstrating that AI is just as bad at extrapolation, since it extrapolated a "true" finding from false information.
It's reckless to use these AIs in searches. If someone jokes about pretending to be a doctor and suggests a stick of butter as the best treatment for a heart attack, and that joke makes it into the training set, how would an AI know it doesn't carry the same weight as actual doctors discussing patterns they've noticed in heart attack patients?