submitted 1 day ago by yoasif@fedia.io to c/fuck_ai@lemmy.world

TL;DR: The advent of AI-based LLM coding applications like Anthropic's Claude and ChatGPT has prompted maintainers to experiment with integrating LLM contributions into open source codebases.

This is a fast path to open source irrelevancy, since the US Copyright Office has deemed LLM outputs uncopyrightable. This means that as more uncopyrightable LLM output is integrated into nominally open source codebases, value leaks out of the project, since open source licences are not operative on public domain code.

That means the public domain, AI-generated code can be reused without attribution and, in the case of copyleft licences, can even be used in closed source projects.

[-] themoken@startrek.website 4 points 1 day ago

That seems pretty good to me? I hate LLMs, but this policy is basically "if it's obviously LLM garbage or you don't understand it, it will be rejected" and I'm not sure it's practical to do better.

People will use LLMs behind the scenes, but if they can write a coherent justification showing a clear understanding of the code, receive feedback from devs and rework it, and submit code that is well structured, it's not really any different from any other PR.

[-] yoasif@fedia.io 2 points 1 day ago

Except for the fact that it is public domain and not protected by the open source license that the code is ostensibly submitted under.

[-] themoken@startrek.website 3 points 1 day ago

How are the devs or anyone else supposed to tell that though, if all the LLM trappings are absent?

[-] uuj8za@piefed.social 1 point 1 day ago

> That seems pretty good to me? I hate LLMs, but this policy is basically "if it's obviously LLM garbage or you don't understand it, it will be rejected" and I'm not sure it's practical to do better.

You did not read/watch the content of this post.

[-] themoken@startrek.website 1 point 1 day ago

I read the post, but as I mentioned elsewhere, how are devs (or malicious commercial thieves looking for public domain code) supposed to detect this code is an LLM creation when all of the obvious signs they mention are stripped?

A ban on people using an LLM in secret is unenforceable, and the code output can be indistinguishable from a human's, especially when a real human who understands the change is there to baby it, write commit messages, etc.

this post was submitted on 08 Apr 2026
72 points (97.4% liked)
