[-] [email protected] 4 points 1 week ago

So, the judge says:

In cases involving uses like Meta’s, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant’s use.

And what is that supposed to look like, ever? Do authors need a better-developed record of the effects of movies on book sales to get paid for movie adaptations, too?

[-] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago)

It's not about moats; it's about the open source community (whose code was trained on) coming out with pitchforks. It has nothing to do with moats.

You are way overselling coding agents.

Re-creating some open source project with a similar function is literally the only way a coding agent can pretend to be a programmer.

I tried the latest models for code and they are in fact capable of shitting out a thousand lines of working code at a time, which obviously can only be obtained via plagiarism, since they are also incapable of writing the most trivial code for a novel situation. And the neat thing about plagiarism is that once you start, you can keep going, since there's more compatible code where it came from.

[-] [email protected] 5 points 2 weeks ago

That is not equivalent, though; other solutions to "cannot be left unattended" exist; just ask Kristi Noem.

[-] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago)

The other funny thing: it only became a fully automatic plagiarism machine when it claimed that it wrote the code (referring to itself by name, which is a dead giveaway that the system prompt makes it do that).

I wonder if code is where they will ultimately get nailed to the wall for willful copyright infringement. Code is too brittle for their standard approach, "we sort of blurred a lot of works together so it's ours now, transformative use, fuck you, prove that you don't just blur other people's work together, huh?".

But for a piece of code, you can very easily test whether two pieces of code have the same "meaning": implement a parser that converts code to an expression graph, then compare the graphs. That makes it far easier to catch output that is functionally identical to the code being plagiarized but looks very different.
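A toy sketch of that idea, using Python's `ast` module as the "parser" and identifier renaming as the normalization step (the snippets and variable names below are made up; a real comparison would normalize far more than names):

```python
import ast

class Canonicalize(ast.NodeTransformer):
    """Rename every identifier to a canonical placeholder (v0, v1, ...)
    in order of first appearance, so structurally identical code
    produces identical trees regardless of naming."""
    def __init__(self):
        self.names = {}

    def visit_Name(self, node):
        new = self.names.setdefault(node.id, f"v{len(self.names)}")
        return ast.copy_location(ast.Name(id=new, ctx=node.ctx), node)

def fingerprint(src):
    # Parse, canonicalize names, and dump the tree to a comparable string.
    tree = Canonicalize().visit(ast.parse(src))
    return ast.dump(tree)

a = "total = 0\nfor x in items:\n    total += x"
b = "acc = 0\nfor item in stuff:\n    acc += item"

print(fingerprint(a) == fingerprint(b))  # True: same structure, different names
```

Two snippets that merely renamed variables collapse to the same fingerprint, while genuinely different logic does not.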

But also I estimate approximately 0% probability that the assholes working on that wouldn't have banter between themselves about copyright laundering.

edit: Another thing is that since it has no conception of its own of what "correct" behavior is for a piece of code being plagiarized, it would also plagiarize all the security exploits.

This hasn't been a big problem for the industry, because only short snippets were being cut and pasted (how to make some stupid API call, etc), but with generative AI whole implementations are going to get plagiarized wholesale.

Unlike any other kind of work, code comes with its own built-in, essentially irremovable "watermark" in the form of security exploits. In several thousand lines of code, there would be enough "watermark" for identification.

[-] [email protected] 5 points 2 weeks ago

Well, it did reach for "I double checked it, I'm totally sure now" language.

From the perspective of trying to convince the top brass that they are making good progress towards creating an artificial psychopath - not just an artificial human - it's pretty good.

[-] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago)

Still seems terminally AI pilled to me, an iteration or two later. "5 digit multiplication is borderline", how is that useful?

I think there's a combination of it being the pinnacle of billions and billions of dollars, and of them probably firing people for the slightest signs of AI skepticism. There's another data point: "reasoning math & code" was released as stable by Google without anyone checking whether it can do any kind of math.

edit: imagine that a calculator manufacturer in the 1970s is so excited about microprocessors that they release an advanced scientific calculator that can't multiply two 6-digit numbers (while their earlier discrete-component model could). Outside the crypto sphere, that sort of insanity is new.

[-] [email protected] 4 points 3 weeks ago

Jesus Christ on a stick, that's some thrice-cursed shit.

Maybe susceptibility runs in families, culturally. Religion does, for one thing.

[-] [email protected] 5 points 3 weeks ago* (last edited 3 weeks ago)

I think this may also be a specific low-level exploit, whereby humans are already biased to mentally "model" anything as having agency (see all the sentient gods that humans invented for natural phenomena).

I was talking to an AI booster (ewww) in another place and I think they really are predominantly laymen brain-fried by this shit. That particular one posted a convo where, out of 4 arithmetic operations, 2 were of the form "12042342 can be written as 120423 + 19, and 43542341 as 435423 + 18", combined with AI word salad, and he was expecting that this would be convincing.

It's not that this particular person thinks it's a genius; he thinks that it is not a mere computer, and the way it is completely shit at math only serves to prove to them that it is not a mere computer.

edit: And of course they care not for any mechanistic explanations, because all of those imply LLMs are not sentient, and they believe LLMs are sentient. The "this isn't it but one day some very different system will" counter argument doesn't help either.

[-] [email protected] 5 points 3 weeks ago

Yeah, I think it is almost undeniable that chatbots trigger some low-level brain thing. Eliza had a 27% Turing test pass rate. And long before that, humans attributed weather and random events to sentient gods.

This makes me think of Langford’s original BLIT short story.

And also of rove beetles that parasitize ant hives. These bugs are not ants, but they pass the Turing test for ants: they tap antennae with an ant, the handshake is correct, and they are identified as ants from this colony rather than unrelated bugs or ants from another colony.

[-] [email protected] 5 points 2 months ago

I seriously doubt he ever worked anywhere like that, not to mention that he’s too spineless to actually get in trouble IRL.

[-] [email protected] 5 points 2 months ago

I think they worked specifically on cheating the benchmarks, though. As well as on popular puzzles like pre-existing variants of the river crossing; it is a very large, very popular puzzle category, and if the river crossing puzzle is not on the list, I don't know what would be.

Keep in mind that they are also true believers: they think that if they cram enough little pieces of logical reasoning, taken from puzzles, into the AI, then they will get a robot god that will actually start coming up with new shit.

I very much doubt that there's some general reasoning performance improvement that results in these older puzzle variants getting solved, while new ones that aren't particularly more difficult, fail.

[-] [email protected] 5 points 1 year ago

Well, I'll be damned: they are trying to get it to work as if it had internal state, by augmenting their training dataset with some auto-generated crap. Of course, if machine learning weren't shit at recurrence they wouldn't be using the transformer architecture, so it doesn't work.

I got the following from chatgpt:

Janet’s ducks lay 3 eggs per day. Each day, she transports one duck and all the eggs to the other side of the river using a boat that can hold her and one other item (either a duck or an egg). How does she do that?

ChatGPT:

Janet can transport the ducks and eggs across the river using the following strategy:

Day 1 (Duck A, Duck B, Duck C on side A; Egg 1, Egg 2, Egg 3 on side A):
    Janet takes Duck A across the river.

Day 2 (Duck B, Duck C on side A; Egg 1, Egg 2, Egg 3 on side A):
    Janet takes Egg 1 across the river.

Day 3 (Duck B, Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side B):
    Janet takes Duck A back to side A.

Day 4 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side A; Egg 1 on side B):
    Janet takes Duck B across the river.

Day 5 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side A; Egg 1, Duck B on side B):
    Janet takes Egg 1 back to side A.

Day 6 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A, Egg 1 on side A; Duck B on side B):
    Janet takes Duck A across the river again.

Day 7 (Egg 2, Egg 3 on side A; Duck A, Egg 1 on side A; Duck B on side B; Duck C on side A):
    Janet takes Duck C across the river.

Day 8 (Egg 2, Egg 3 on side A; Duck A, Egg 1, Duck C on side A; Duck B on side B):
    Janet takes Egg 2 across the river.

Day 9 (Egg 3 on side A; Duck A, Egg 1, Duck C on side A; Duck B on side B; Egg 2 on side B):
    Janet takes Duck A back to side A.

Day 10 (Egg 3 on side A; Duck C on side A; Duck A on side A; Duck B, Egg 1, Egg 2 on side B):
    Janet takes Duck B across the river.

Day 11 (Egg 3 on side A; Duck C on side A; Duck A, Duck B on side A; Egg 1, Egg 2 on side B):
    Janet takes Egg 1 back to side A.

Day 12 (Egg 3 on side A; Duck C on side A; Duck A, Egg 1, Duck B on side A; Egg 2 on side B):
    Janet takes Duck A across the river again.

Day 13 (Duck C on side A; Duck A, Egg 1, Duck B on side A; Egg 2 on side B; Egg 3 on side B):
    Janet takes Duck C across the river.

By following this sequence, Janet successfully transports all 3 ducks and all 3 eggs across the river in 13 trips. Each trip involves either transporting one duck or one egg, adhering to the constraint that Janet can only carry one item (duck or egg) with her in the boat.
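You don't even have to read the transcript carefully to see it falls apart; replaying its own move list mechanically does the job. (The moves below are my transcription of the 13 "days" above; the checker itself is just a sketch.)

```python
# Replay the transcript's 13 moves, tracking which bank each item is on,
# and flag any move that picks an item up from the wrong bank.
moves = [
    ("Duck A", "B"), ("Egg 1", "B"), ("Duck A", "A"),
    ("Duck B", "B"), ("Egg 1", "A"), ("Duck A", "B"),
    ("Duck C", "B"), ("Egg 2", "B"), ("Duck A", "A"),
    ("Duck B", "B"), ("Egg 1", "A"), ("Duck A", "B"),
    ("Duck C", "B"),
]

side = {item: "A" for item in
        ["Duck A", "Duck B", "Duck C", "Egg 1", "Egg 2", "Egg 3"]}

for day, (item, dest) in enumerate(moves, 1):
    src = "A" if dest == "B" else "B"
    if side[item] != src:
        # Days 10, 11, and 13 all trip this check: the item is
        # already on the bank it's supposedly being carried to.
        print(f"Day {day}: {item} is on side {side[item]}, "
              f"not side {src} -- impossible move")
    side[item] = dest

print("Final state:", side)
print("All across?", all(s == "B" for s in side.values()))
```

Three of the moves are physically impossible, Egg 3 is never touched at all, and Egg 1 ends the "successful" solution back on the starting bank.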
