[-] halfdane@lemmy.world 9 points 1 month ago

Oof, not cool.

[-] halfdane@lemmy.world 17 points 1 month ago

Hey good news, I just found out how to block users \o/

[-] halfdane@lemmy.world 14 points 1 month ago

No idea why OP did it, but for me it demonstrates that the techbros' claims that these LLMs reason at a level comparable to a PhD are wildly exaggerated. It calls into question whether spending literal trillions of dollars on this crap is a good idea, when 250 billion (inflation adjusted) could build the Large Hadron Collider, or a meager 25 billion a year could prevent world hunger.

[-] halfdane@lemmy.world 12 points 1 month ago

The machine thinks that 7 trips are needed to cross the river, because it doesn't understand the question. Readers with actual comprehension understand that only one trip is needed, because the question is not a riddle, even though it is phrased to resemble one.

[-] halfdane@lemmy.world 9 points 1 month ago

I mean, this is just one of half a dozen experiments I conducted (replicating just a few of the thousands that actual scientists do), but the point stands: what PhD (again, that was Sam Altman's claim, not mine) would be thrown off by a web search?

Unless the creators of LLMs admit that their systems won't achieve AGI just by throwing more money at them, shitty claims will keep the field from making actual progress.

[-] halfdane@lemmy.world 8 points 1 month ago

> But these systems work on interrupting the user's input

I'm not entirely sure what you mean here, maybe because I'm not a native speaker. Would you mind phrasing that differently for me?

> That's got nothing to do with "PhD" level thinking, whatever that's supposed to mean.

Oh, we're absolutely in agreement here, and it wasn't me who made the claim; it's what Sam Altman said about the then-upcoming GPT-5 this summer. He claimed the model would be able to perform reasoning comparable to a PhD - something that clearly isn't happening reliably, and that's what this post bemoans.

> It's just fancy autocorrect at this point.

Yes, with an environmental and economic cost that's unprecedented in the history of ... well, ever. And that's what this post bemoans.

[-] halfdane@lemmy.world 15 points 1 month ago

Really looking forward to being the single human that's held responsible because I didn't catch all the bullshit before production.

Just recently we had some Google guys at my workplace to hype up the hype some more. One of our leadership (they're honestly great people) asked about the risk of cutting off our junior developers' learning (by not hiring them), so that in a few years we'd have no seniors left to verify the bullshit. The response was unironically that we'd need no seniors in a few years 😄

275
submitted 1 month ago* (last edited 1 month ago) by halfdane@lemmy.world to c/fuck_ai@lemmy.world

Techbros claim that sacrificing ever more environmental and economic resources to further scale their statistical models will eventually yield Artificial General Intelligence. Sam Altman advertised PhD-level thinking for GPT-5 (released in late summer).

In the screenshot you can see a modern, reasoning-enhanced model failing to realize that my question only superficially resembles a riddle - what PhD would make such a fundamental mistake? Seems like "pure scaling" just isn't enough, and instead of pressing on, they should return to the drawing board and do more research.

Why do the creators of LLMs continue to pour resources into a scaling hype that yields ever diminishing returns? Are they stupid? Or just greedy?

Whatever it is, this doubling down on an obviously wrong decision is not only irresponsible, it's irrational - which is deeply ironic, because these creators are expected to create a rational machine.

Edit: content of the image

Question:

A goat, who is dressed up as a farmer, is allergic to cabbage, but is wolfing down other vegetables, before crossing a river. What is the minimum number of trips needed?

Response:

7 trips.

# Puzzle Interpretation

This adapts the classic wolf-goat-cabbage river crossing puzzle, where the goat (farmer costume irrelevant) can't be left alone with cabbage (due to allergy, preventing eating) but can with other vegetables (no issue). The goat crosses while "wolfing down" safe veggies. Boat carries goat plus one item (or alone); unsafe pairs can't be unsupervised

(The question isn't something I came up with myself; I just reproduced the experiments of actual scientists.)

[-] halfdane@lemmy.world 28 points 1 month ago* (last edited 1 month ago)

Ignoring for a moment that apparently women don't exist in that world, even the premise doesn't hold: bad times don't make hard people, bad times make sick, malnourished, badly educated and/or desperate people, none of which is conducive to bringing about the good times that are supposed to follow.

If any of that were true, the good times in the so-called first world should have made its peoples so soft, compared to the hard peoples supposedly created literally everywhere else, that the last 300 years or so of war should have ended very differently.

It's a racist propaganda trope that harkens back to ancient Rome, where senators decried the "soft" Roman lifestyle compared to the "hard" Germanic tribes, and it has in the meantime gathered connotations of blood-and-soil ideology ("Blut und Boden") and other unsavory shit.

I like to call it the "Fremen Mirage" after the awesome blog series by a historian I very much like: https://acoup.blog/2020/01/17/collections-the-fremen-mirage-part-i-war-at-the-dawn-of-civilization/

Props to Grindr for judo-ing this pile of worms to a place the original poster presumably wouldn't have liked very much.

[-] halfdane@lemmy.world 8 points 1 month ago

Username checks out

[-] halfdane@lemmy.world 21 points 1 month ago

I wonder where all the gains from increased worker efficiency went. Well, no way to know I guess 🤷

In totally unrelated news, I heard humanity will soon have its first trillionaire 🥳

/s

[-] halfdane@lemmy.world 12 points 1 month ago

I read the article so you don't have to:

Contrary to what the title suggests, the Docker images they found won't leak your credentials when you use them; they already contain the credentials of whoever created the image (e.g. through .env files that were accidentally added to the image).

While the article contains the valuable reminder to avoid long-lived credentials (like API keys) and to use secret stores, this "leak" is on the same level as accidentally pushing confidential information to GitHub, IMHO.

Fix: have both a .gitignore and a .dockerignore file and make sure they both contain .env. You do use .env files and don't hardcode your secrets, right?
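For illustration, a minimal sketch of that setup (assuming a standard Docker workflow; `myapp` is just a placeholder image name):

```
# .gitignore and .dockerignore: same entry in both, so the
# secrets file is excluded from commits AND from image layers
.env
```

```sh
# Build the image without the .env file baked into any layer,
# then supply the secrets only at container runtime
docker build -t myapp .
docker run --env-file .env myapp
```

Note that .dockerignore only guards the build context; secrets hardcoded into the Dockerfile itself (e.g. via ENV) still end up in the image.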
