[-] [email protected] 6 points 2 weeks ago

Yeah, that's a great example.

The other thing is that unlike art, source code is already made to be consumed by a machine. It is no more transformative to convert source code into equivalent source code than it is to re-encode a video.

The only thing they do that is "transformative" is using source code not for compiling it but for defrauding the investors.

[-] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago)

Also, I just noticed something really fucking funny:

(arrows are for the sake of people like llllll...)

[-] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago)

there was a directive that if it were asked a math question that you can’t do in your brain or some very similar language it should forward it to the calculator module.

The craziest thing about leaked prompts is that they reveal the developers of these tools to be complete AI-pilled morons. How in the fuck would it know whether it can or can't do something "in its brain", lol.

edit: and of course their equally idiotic fanboys go "how stupid of you to expect it to use a calculating tool when it said it used a calculating tool" any time you have some concrete demonstration of it sucking ass, while the same kind of people laud the genius of system prompts, half of which are asking it to meta-reason.

[-] [email protected] 7 points 2 weeks ago

I think I figured it out.

He fed his post to AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed, just as his post describes.

[-] [email protected] 6 points 2 months ago* (last edited 2 months ago)

It re-consumes its own bullshit, and the bullshit it does print is the bullshit it also fed itself; it's not lying about that. Of course, it is also always re-consuming the initial prompt, so the end bullshit isn't necessarily as far removed from the question as the length would indicate.

Where it gets deceptive is when it knows an answer to the problem, but constructs some bullshit for the purpose of making you believe it solved the problem on its own. The only way to tell the difference is to ask it something simpler that it doesn't know the answer to, and watch it bullshit in circles or arrive at an incorrect answer.

[-] [email protected] 6 points 3 months ago* (last edited 3 months ago)

Yeah it really is fascinating. It follows some sort of recipe to try to solve the problem, like it's trained to work a bit like an automatic algebra system.

I think they employed a lot of people to write generators of variants of select common logical puzzles, e.g. river crossings with varying boat capacities and constraints, generating both the puzzle and the corresponding step-by-step solution with "reasoning" and a re-print of the state of the items on every step, and all that.
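
A rough sketch of what I imagine such a generator looks like, just to illustrate (this is my own toy, not anything they actually shipped: a variant picker plus a brute-force solver that re-prints the banks on every step):

```python
# Toy generator of river-crossing variants plus a BFS solver that
# prints the full state after every crossing. Purely illustrative.
import random
from itertools import combinations
from collections import deque

ITEMS = ["wolf", "goat", "cabbage", "duck", "corn"]

def make_variant(n_items=4, capacity=1, n_conflicts=2, seed=0):
    rng = random.Random(seed)
    items = rng.sample(ITEMS, n_items)
    conflicts = rng.sample(list(combinations(items, 2)), n_conflicts)
    return items, capacity, conflicts

def safe(bank, farmer_here, conflicts):
    # a bank is only unsafe when the farmer is absent and a conflicting pair is left together
    return farmer_here or not any(a in bank and b in bank for a, b in conflicts)

def solve(items, capacity, conflicts):
    start = (frozenset(items), frozenset(), 0)   # (left bank, right bank, farmer's side)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        (left, right, farmer), path = queue.popleft()
        if not left and farmer == 1:
            return path
        here, there = (left, right) if farmer == 0 else (right, left)
        for k in range(capacity + 1):            # farmer takes 0..capacity items along
            for cargo in combinations(sorted(here), k):
                new_here = here - set(cargo)
                new_there = there | set(cargo)
                if not safe(new_here, False, conflicts):
                    continue
                nl, nr = (new_here, new_there) if farmer == 0 else (new_there, new_here)
                state = (frozenset(nl), frozenset(nr), 1 - farmer)
                if state not in seen:
                    seen.add(state)
                    step = f"cross with {list(cargo) or 'nothing'}; left={sorted(nl)}, right={sorted(nr)}"
                    queue.append((state, path + [step]))
    return None                                  # this particular variant is unsolvable

items, cap, conflicts = make_variant(seed=1)
print("puzzle:", items, "boat capacity:", cap, "constraints:", conflicts)
for i, step in enumerate(solve(items, cap, conflicts) or ["no solution"], 1):
    print(f"step {i}: {step}")
```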

It seems to me that their thinking is that successive parroting can amount to reasoning, if it's parroting well enough. I don't think it can. They have this one-path approach, where it just tries doing steps and representing state, always trying the same thing.

What they need for this problem is to take a different kind of step: reduction (the duck cannot be left unsupervised -> the duck must be taken with me on every trip -> rewrite the problem without the duck and with boat capacity reduced by 1 -> solve -> rewrite the solution with "take the duck with you" on every trip).
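
A sketch of that reduction, reusing the toy solve() from above (again mine, purely illustrative; it ignores edge cases like the duck itself conflicting with something it now has to ride with):

```python
def solve_with_chaperoned_item(items, capacity, conflicts, chaperoned="duck"):
    # the item that can't be left unsupervised rides along on every crossing,
    # so it drops out of the puzzle and permanently occupies one boat seat
    reduced_items = [it for it in items if it != chaperoned]
    reduced_conflicts = [c for c in conflicts if chaperoned not in c]
    reduced_solution = solve(reduced_items, capacity - 1, reduced_conflicts)
    if reduced_solution is None:
        return None
    # splice the chaperoned item back into every trip of the reduced solution
    return [f"take the {chaperoned} along; {step}" for step in reduced_solution]
```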

But if they add this, then there are two possible paths it can take on every step, and this thing is far too slow to brute-force the right one. They may get it to solve my duck variant, but at the expense of making it fail a lot of other variants.

The other problem is that even the most seemingly elementary reasoning involves very many applications of basic axioms. This is what doomed symbol-manipulation "AI" in the past, and this is what is dooming it now.

[-] [email protected] 6 points 3 months ago* (last edited 3 months ago)

It’s a failure mode that comes from pattern matching without actual reasoning.

Exactly. Also, looking at its chain-of-wordvomit (which apparently I can't share other than by cutting and pasting it somewhere), I don't think this is the same as GPT-4 overfitting to the original river crossing and always bringing items back needlessly.

Note also that in one example it discusses moving the duck and another item across the river (so "up to two other items" works); it is not ignoring the prompt, and it isn't even trying to bring anything back. And its answer (calling it impossible) has nothing to do with the original.

In the other one it does bring items back; it tries different orders and even finds an order that actually works (with two unnecessary moves), but because it isn't an AI fanboy reading tea leaves, it still gives the wrong answer.

Here are the full logs:

https://pastebin.com/HQUExXkX

Content warning: AI wordvomit which is so bad it gets folded/hidden in a Google tool.

[-] [email protected] 6 points 8 months ago

Well the OP talks about a fridge.

I think if anything it's even worse for tiny things with tiny screws.

What kind of floating hologram is there gonna be that's of any use, for something that has no schematic and the closest you have to a repair manual is some guy filming themselves taking apart some related product once?

It looks cool in a movie because it's a 20-second clip in which one connector gets plugged in, and tens of person-hours were spent on it by very talented people who know how to set up a scene that looks good rather than just visually noisy.

[-] [email protected] 6 points 8 months ago

Exactly. It goes something like "remember when you were fixing a washing machine and you didn't know what some part was, and there was no good guide for fixing it, no schematic, no nothing? Wouldn't it be awesome if 100x the work that wasn't put into making documentation was also not put into making VR overlays?"

[-] [email protected] 7 points 1 year ago

Frigging exactly. It's a dumb-ass dead end that is fundamentally incapable of doing the vast majority of things ascribed to it.

They keep imagining that it would actually learn some underlying logic from a lot of text. All it can do is store a bunch of applications of said logic, as in a giant table. Deducing underlying rules instead of simply memorizing particular instances of them is a form of compression; there wasn't much compression going on to begin with, and now that the models are so over-parametrized, there's even less.

[-] [email protected] 6 points 1 year ago* (last edited 1 year ago)

I tried the same prompt a lot of times and saw "chain of thought" attempts complete with the state modeling... they must be augmenting the training dataset with some sort of script-generated crap.

I have to say those are so far the absolute worst attempts.

Day 16 (Egg 3 on side A; Duck 1, Duck 2, Egg 1, Egg 2 on side B): Janet takes Egg 3 across the river.

"Now, all 2 ducks and 3 eggs are safely transported across the river in 16 trips."

I kind of feel that this undermines the whole point of using transformer architecture instead of a recurrent neural network. Machine learning sucks at recurrence.

[-] [email protected] 6 points 1 year ago

Well, the problem is it not having any reasoning, period.

It's not clear what symbolic reasoning would entail, but puzzles generally require you to think through several approaches to solve them, too. That requires a world model, a search, etc.: the kind of stuff that actual AIs, even a tic-tac-toe AI, have, but LLMs don't.
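
For contrast, here is roughly what I mean by a world model plus a search, even in a toy: a tic-tac-toe player keeps an explicit board state and searches over possible futures. A minimal minimax sketch of my own, nothing more:

```python
# Minimal minimax tic-tac-toe: the board list is the "world model",
# the recursion over hypothetical moves is the "search".
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None     # score from X's point of view
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                           # draw
    best = None
    for m in moves:
        b[m] = player                            # try the move on the model...
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "                               # ...and undo it
        if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
            best = (score, m)
    return best

board = list("X O  O X ")                        # explicit state of the world
print(minimax(board, "X"))                       # search picks X's best move
```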

On top of that, this all works through machine learning, which produces the resulting network weights through very gradual improvement at next-word prediction, tiny step by tiny step. Even if some sort of discrete model (like, say, an account of what's on either side of the river) could help it predict the next token, a tiny fraction of such a discrete "model" wouldn't help at all, and so it simply does not go down that path.
