[-] [email protected] 9 points 2 weeks ago

lmao: they have fixed this issue, it seems to always run Python now. Got to love how they just put this shit in production as "stable" Gemini 2.5 Pro, with that idiotic multiplication thing that everyone knows about, and expect what? To Eliza-Effect people into marrying Gemini 2.5 Pro?

[-] [email protected] 9 points 2 weeks ago* (last edited 2 weeks ago)

I think it's gotten to the point where it's about as helpful to point out that it's just an autocomplete bot as it is to point out that "it's just the rotor blades chopping sunlight" when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the BLIT short story, that it's just some ink on a wall.

The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.

And yet it has exploitable failure modes, and a corporation optimizing an LLM for various KPIs is a malign intelligence searching for a way to hack brains, this time with much better automated tooling and a very large budget. One may even say a superintelligence, since it is throwing the combined efforts of many at the problem.

edit: that is to say, there has certainly been something weird going on at the psychological level ever since Eliza.

Yudkowsky is a dumbass layman posing as an expert, and he's playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger - even if he has to attribute a good chunk of the malevolence to a dumbass autocomplete to do so - that is not too terrible a thing.

[-] [email protected] 9 points 1 month ago* (last edited 1 month ago)

I swear I'm gonna plug an LLM into a rather traditional solver I'm writing. I may tuck a note deep into the paper about how it's quite slow to use an LLM to mutate solutions in a genetic algorithm or a swarm solver. And in any case the non-LLM path would be the default. (Something like the sketch below.)
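
Roughly this shape - a minimal sketch, where `llm_mutate` is a hypothetical stub standing in for the LLM call (the stub and all the knobs here are my own illustration, not anything from a real paper):

```python
import random

def random_mutate(solution, rate=0.1):
    # Default, non-LLM mutation: jiggle each gene with small Gaussian noise.
    return [x + random.gauss(0, 0.5) if random.random() < rate else x
            for x in solution]

def llm_mutate(solution):
    # Hypothetical stub: ask an LLM to propose a "better" candidate.
    # Orders of magnitude slower than random_mutate, and the reply still
    # has to be parsed and validated before it can be trusted.
    raise NotImplementedError("plug an LLM call in here")

def evolve(fitness, genes=8, pop_size=50, generations=200, use_llm=False):
    mutate = llm_mutate if use_llm else random_mutate  # non-LLM by default
    pop = [[random.uniform(-5, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower fitness = better
        survivors = pop[: pop_size // 2]           # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

# Usage: minimize the sphere function. The LLM path stays off by default.
best = evolve(lambda s: sum(x * x for x in s))
print(sum(x * x for x in best))
```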

Normally I wouldn’t sink that low but I got mouths to feed, and frankly, fuck it, they can persist in this madness for much longer than I can stay solvent.

It's as if there were a mass delusion that a pseudorandom number generator can serve as an oracle, predicting the future. Doing any kind of Monte Carlo simulation of something like weather in that world would of course "confirm" all the dumb shit.
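
For illustration, a minimal Monte Carlo sketch (estimating π rather than weather, to keep it short): it converges beautifully off a PRNG, which in that world would read as proof the generator sees the future:

```python
import random

def estimate_pi(n=1_000_000, seed=42):
    # Monte Carlo: sample points in the unit square, count hits inside
    # the quarter circle. Driven entirely by a PRNG.
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * hits / n

print(estimate_pi())  # ~3.1415, converges nicely - and proves nothing about oracles
```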

[-] [email protected] 9 points 1 month ago

I wonder what's gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.

[-] [email protected] 8 points 2 months ago* (last edited 2 months ago)

He’s such a complete moron. He doesn’t want to recite “DEI shibboleths”? What does he even think that would refer to? Why shibboleths?

To spell it out: that would refer to the antisemitic theory that the reason (for example) some Black guy would get a Medal of Honor (the "DEI medal") is the Jews.

I swear this guy is dumber than Trump. Trump, for all his rambling, uses actual language - Trump understands what the shit he is saying means to his followers. Scott… he really does not.

[-] [email protected] 8 points 2 months ago* (last edited 2 months ago)

And it is Google we're talking about, lol. If no one uses their AI shit, they just replace something people do use with it (see also: search).

[-] [email protected] 8 points 2 months ago* (last edited 2 months ago)

I just describe it as "computer scientology, nowhere near as successful as the original".

The other thing is that he's a Thiel project: different from, but no more sane than, Curtis Yarvin aka Moldbug. So if they've heard of Moldbug's political theories (which increasingly many people have, because of, well, those theories being enacted), it's easy to give a general picture of total fucking insanity funded by Thiel money. It doesn't really matter what the particular insanity is, and it matters even less now that the AGI shit has hit the mainstream while entirely bypassing anything Yudkowsky had to say on the subject.

[-] [email protected] 9 points 3 months ago* (last edited 3 months ago)

Yeah, exactly. There's no trick to it at all, unlike the original puzzle.

I also tested OpenAI's offerings a few months back with similarly nonsensical results: https://awful.systems/post/1769506

The all-vegetables, no-duck variant is solved correctly now, but I doubt that's due to improved reasoning as such; I think they may have augmented the training data with variants of the river crossing. The river crossing is one of the best-known puzzles, and various people have been posting hilarious bot failures with variants of it. So it wouldn't be unexpected for their training data augmentation to include river-crossing variants.

Of course, there are very many ways the puzzle can be modified, and their augmentation would only cover the obvious stuff, like varying which items can be left with which items, or the number of spots on the boat - something like the variant generator sketched below.
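
To be clear, this is just my guess at what such augmentation looks like, not anything Google has disclosed - a toy generator that only varies the surface features:

```python
import random

ITEMS = ["wolf", "goat", "cabbage", "duck", "corn", "fox", "hen", "cat"]

def make_variant(rng, n_items=3, boat_spots=1):
    # Pick items and randomly decide which adjacent pairs conflict,
    # i.e. can't be left alone together on a river bank.
    items = rng.sample(ITEMS, n_items)
    conflicts = [pair for pair in zip(items, items[1:]) if rng.random() < 0.8]
    rules = "; ".join(f"the {a} cannot be left alone with the {b}"
                      for a, b in conflicts) or "there are no conflicts"
    return (f"A farmer must ferry a {', a '.join(items)} across a river. "
            f"The boat holds the farmer plus {boat_spots} item(s); {rules}. "
            f"How does everything get across?")

rng = random.Random(0)
for _ in range(3):
    print(make_variant(rng), "\n")
```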

[-] [email protected] 8 points 8 months ago* (last edited 8 months ago)

A Nobel Prize in Physics for attempting to use physics in AI, except it didn't really work very well; then one of the guys went on to a better, more purely mathematical approach that actually worked and got him the Turing Award - but that's not what the prize is for - while the other guy did some other work, but that's not what the prize is for either. AI will solve all physics!!!111

[-] [email protected] 8 points 1 year ago

Perhaps it was nearly ready to emit a stop token after "the robot can take all 4 vegetables in one trip if it is allowed to carry all of them at once.", but "However" won the sampling step, and after "However" it had to say something else, because that's how "however" works...
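
A toy sketch of what I mean, with completely made-up probabilities:

```python
import random

# Made-up next-token distribution after "...carry all of them at once."
# The stop token nearly wins, but "However" edges it out.
probs = {"<|endoftext|>": 0.48, "However": 0.52}

rng = random.Random(1)
token = rng.choices(list(probs), weights=list(probs.values()))[0]

if token == "However":
    # Every subsequent token is now conditioned on "However", so the model
    # is committed to inventing a contrast that was never needed.
    print("However, ...<spurious complication goes here>...")
else:
    print("<the answer ends cleanly>")
```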

Agreed on the style being absolutely nauseating. It wasn't a very good style when humans were using it, but now it is just the style of absolute bottom-of-the-barrel, top-of-the-search-results garbage.

[-] [email protected] 9 points 1 year ago* (last edited 1 year ago)

I think you can make a slight improvement to Wolfram Alpha: use an LLM to translate natural-language queries into queries WA can consume, then feed those into WA. WA always reports exactly what it computed, so if it "misunderstands" you, it's a lot easier to notice. (A rough sketch of the pipeline is below.)
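
Something like this - `llm_translate` is a hypothetical stub, and I'm assuming WA's Short Answers endpoint, which returns a single plain-text result:

```python
import urllib.parse
import urllib.request

WA_APPID = "YOUR-APP-ID"  # placeholder - you'd need your own Wolfram app ID

def llm_translate(natural_query: str) -> str:
    # Hypothetical stub: the LLM rewrites the question into WA-friendly syntax,
    # e.g. "how heavy is a litre of mercury" -> "mass of 1 liter of mercury".
    raise NotImplementedError("plug an LLM call in here")

def ask_wolfram(wa_query: str) -> str:
    # Short Answers API: plain-text result for a plain-text query.
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": WA_APPID, "i": wa_query}))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

def answer(natural_query: str) -> str:
    wa_query = llm_translate(natural_query)
    # Surface the translated query alongside the result: WA reports exactly
    # what it computed, so a mistranslation is easy for the user to spot.
    return f"interpreted as: {wa_query!r}\nresult: {ask_wolfram(wa_query)}"
```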

The problem here is that the AI boys got themselves hyped up for it being actually intelligent, so none of them would ever settle for some modest application of LLMs. Google fired the authors of the "stochastic parrot" paper, AFAIK.

Simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let's be honest, is the first thing tech bros will try, as it doesn't require much basic research) will not help that much, and will likely generate an entirely new breed of hilarious errors and bullshit. (I like the term "bullshit" instead of "hallucination"; it captures the connotation that the errors are of a kind with the normal output.)

Yeah, I have examples of that as well. I asked GPT-4 at work to calculate the volume of a 10 cm long, 0.1 mm diameter wire. It seems to do correct arithmetic by some mysterious means that don't use scientific notation, and then, since the LLM can't actually count, it miscounts zeroes and outputs a result that is 1000x larger than the correct answer. (The correct arithmetic, for reference, is below.)
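
For the record, the correct number - a quick check that any scientific-notation-capable tool gets right:

```python
import math

length_m = 10e-2      # 10 cm
diameter_m = 0.1e-3   # 0.1 mm
radius_m = diameter_m / 2

volume = math.pi * radius_m**2 * length_m   # V = pi * r^2 * L
print(f"{volume:.3e} m^3")          # 7.854e-10 m^3
print(f"{volume * 1e9:.4f} mm^3")   # ~0.7854 mm^3 (1 m^3 = 1e9 mm^3)
# A 1000x overestimate is exactly three miscounted zeroes - the same slip as
# confusing mm^3 with cm^3 (1 cm^3 = 1000 mm^3).
```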

[-] [email protected] 9 points 1 year ago

GPT-4, supposedly (it says that it is GPT-4). I have access to one that is cleared for somewhat sensitive data, so presumably my queries aren't getting flagged and human-reviewed by OpenAI.
