this post was submitted on 03 Sep 2024
491 points (96.4% liked)

Microblog Memes

5375 readers
4831 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

Related communities:

founded 1 year ago
MODERATORS
top 39 comments
sorted by: hot top controversial new old
[–] [email protected] 41 points 1 week ago

I can't believe this got released and this is still happening.

This is the revolution in search of RAG (retrieval augmented generation), where the top N results of a search get fed into an LLM and reprioritized.

The LLM doesn't understand it want to understand the current, it loads the content info it's content window and then queries the context window.

It's sensitive to initial prompts, base model biases, and whatever is in the article.

I'm surprised we don't see more prompt injection attacks on this feature.

[–] [email protected] 36 points 1 week ago (2 children)
[–] [email protected] 3 points 1 week ago

That's why you should always have your backpack when you board a plane (along with your towel, of course).

[–] [email protected] 2 points 1 week ago

Thanks! The OP is a mess (at least when using Boost).

[–] [email protected] 30 points 1 week ago (1 children)

At first I was at least impressed that it came up with such a hilarious idea, but then it of course turns out (just as with the pizza glue) it just stole it from somewhere else.

[–] [email protected] 16 points 1 week ago

"AI" as we know them are fundamentally incapable of coming up with something new, everything it spits out is a combination of someone else's work.

[–] [email protected] 26 points 1 week ago* (last edited 1 week ago)

The big problem with these AI is not that the technology is fundamentally flawed. It's that stupid humans just dump huge amounts of data at it without checking any of it for validity.

If you train a human on bullshit of course they're not going to know the difference between truth and lies, how could they possibly do that if all they're ever told is utter nonsense. I really would have thought better of people at Google, you'd have thought someone there would have had the intelligence to realize that not everything on the internet is solid gold.

This is why I've never really bothered that they were training on data from Reddit.

[–] [email protected] 17 points 1 week ago
[–] [email protected] 17 points 1 week ago (1 children)

I thought this was fake, but I searched Google for "parachute effectiveness" and that satirical study is at the top, and literally every single link below it is a reference to that satirical study. I have scrolled for a good minute and found no genuine answer...

[–] [email protected] 10 points 1 week ago* (last edited 1 week ago) (2 children)

I have scrolled for a good minute and found no genuine answer...

Presumably because the effectiveness of parachutes is pretty self-evident and no one has done a formal study on the subject because why would they.

It's not like they're going to find that parachutes actually aren't needed and the humans just float gently to the ground on their own. Or that a sufficiently tall top hat can be just as effective.

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago) (1 children)

Parachute effectiveness is a very reasonable thing to study, it's pretty important to know how one parachute design performs compared to other designs and the obvious baseline is no parachute. A lot of things which appear to be self-evident have been extensively studied, generally you don't want to just assume you know how something works.

Though throwing people out of a plane at altitude with no parachute probably isn't the most ethical way to study parachute effectiveness.

[–] [email protected] 1 points 1 week ago (1 children)

I think we know enough about aerodynamics that we can probably simulate it in the computer if we really cared to. My point is more that it's probably never been studied at an academic level, I'm sure parachute manufacturers ans various militaries have studied all sorts of things. But none of that would have made it into a research paper.

[–] [email protected] 2 points 1 week ago (1 children)

Here's a study on cadavers to determine whether people have the same number of nose hairs in each nostril. In academia there is no such thing as too trivial.

There's plenty of studies on parachutes for spacecraft (eg, here's one on aerodynamics of parachutes for mars landing) so if you follow the references somewhere down the line you'll probably find studies on general parachute effectiveness.

[–] [email protected] 1 points 1 week ago

You have to search using language that papers might actually use though. "Parachute effectiveness" means what the satirical paper is exploring, whether it prevents death or not. The only serious studies that might have used that language would be old WW2 studies that threw people out of planes with different parachutes to see how many survived.

If you want to know how to design an effective parachute, you should be looking at reference books like Parachute Recovery Systems instead.

[–] [email protected] 2 points 1 week ago

This is a good point I didn't think of. It's possible that the answer is so obvious, that the only articles made about it would be jokes.

[–] [email protected] 16 points 1 week ago

Why waste money on parachutes when everyone has a backpack at home?!

[–] [email protected] 14 points 1 week ago (1 children)
[–] [email protected] 18 points 1 week ago (1 children)

Better yet, don't use Google for searching when possible.

[–] [email protected] 10 points 1 week ago

I just searched "do maple trees have a tap root", ai overview says yes. Literally all other reputable sources say no 🙄

[–] [email protected] 9 points 1 week ago

Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground

[–] [email protected] 7 points 1 week ago (1 children)
[–] [email protected] 5 points 1 week ago

Hah! I just told someone the other day that LLMs trick people into seeing intelligence basically by cold reading! At last, sweet validation!

[–] [email protected] 4 points 1 week ago (1 children)

At least in the actual Gemini chat when asked about parachute study effectiveness correctly notes this study as satire.

[–] [email protected] 3 points 1 week ago (1 children)

Weird that it actually made note of that but didn't put it in the summary

[–] [email protected] 1 points 1 week ago

It's because it has no idea what is important or isn't important. It's all just information to it and it all carries the same weight, whether it's "parachutes aren't any more effective than empty backpacks" or "this study is a satire making fun of other studies that extrapolate information carefully". Even though that second bit essentially negates the first bit.

Bet the authors weren't expecting their joke study to hit a second time like this, demonstrating that AI is just as bad at extrapolating since it extrapolated true information from false.

It's reckless to use these AIs in searches. If someone jokes about pretending to be a doctor and suggesting a stick of butter being the best treatment for a heart attack and that joke makes it into the training set, how would an AI have any idea that it doesn't have the same value as some doctors talking about patterns they've noticed in heart attack patients?

[–] [email protected] -5 points 1 week ago

LLMs indeed have a way of detecting satire. The same way humans do.

It’s just that now they’re like the equivalent of five year olds in terms of their ability to detect sarcasm.