this post was submitted on 09 Jan 2025
Reproducing identifiable chunks of copyrighted content in the LLM's output is copyright infringement, though, and that's what training on copyrighted material leads to. Of course, that's the other end of the process, and it's a tort, not a crime, so yeah, you make a good point that the company's legal calculus could be different.
Thank you, I'm glad someone is sane ITT.
To further refine the point: do you know of any lawsuits that were ruled successfully on the basis that, as you say, the company that made the LLM is responsible because someone could prompt it to reproduce identifiable chunks of copyrighted material? Which specific statutes make it so?
Wouldn't it be like suing Seagate because I use their hard drives to pirate corpo media? I thought Sony Corp. of America v. Universal City Studios, Inc. would serve as the basis there, and that, just like with Betamax, it'd be the distribution of copyrighted material by an end user that would be problematic, rather than the potential of a product to be used for copyright infringement.
https://www.youtube.com/watch?v=uY9z2b85qcE
To be clear, I think it ought to be the case that at least "copyleft" GPL code can't be used to train an LLM without requiring that all output of the LLM become GPL (which, if said GPL training data were mixed with proprietary training data, would likely make the model legally unusable in total). AFAIK it's way too soon for there to be a precedent-setting court ruling about it, though.
In particular...
...I don't see how this has any relevance at all, since the whole purpose of an LLM is to make new -- arguably derivative -- works on an industrial scale, not just single copies for personal use.