The important distinction is that this "database" would be the training data, which the model only has access to during training. It does not have access to that data once it is actually deployed and running.
An easy way to think of it is like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once the test starts you can only go off of what you remember. Sure, you might remember bits and pieces, but that is not the same thing as being able to pull directly from any textbook you want at any time.
It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.
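To make the analogy concrete, here's a minimal toy sketch in Python (everything here, including the file name corpus.txt, is hypothetical and purely illustrative): a tiny bigram "model" is trained from a text file, the file is then deleted, and generation still works because everything the model retains lives in its learned counts, not in the original text.

```python
import os
import random
from collections import defaultdict

# "Training": read the corpus once and count word-to-word transitions.
# After this step, these counts are the model's only memory of the text.
def train(path):
    counts = defaultdict(lambda: defaultdict(int))
    with open(path) as f:
        words = f.read().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

# "Inference": sample the next word from the learned counts.
# Note this never touches the original file.
def generate(counts, start, length=20):
    out = [start]
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break
        words, weights = zip(*nexts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

model = train("corpus.txt")    # hypothetical training file
os.remove("corpus.txt")        # the "textbook" is gone...
print(generate(model, "the"))  # ...but generation still works from memory
```

An LLM is vastly more complex, but the same point holds: at inference time there is no database to query, only whatever (lossy) statistics survived training in the weights.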
It doesn't have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.
However, this does bring up a very interesting question that I'm not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?
In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because only a "portion" of any copyright-covered data gets encoded in the weights, and because the weights are some irreversible combination of all such "portions" from all of the training data, you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.
However, attacks like this seem only able to recover random chunks of training data. So someone can't take a body of training data, insert a specific copyrighted work into it, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack that gets that specific work back out. In other words, it's really hard to orchestrate a way to violate someone's copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone's distribution of a pre-trained model or access to one.
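For a sense of what "recovering a chunk" means operationally, here's a rough sketch (the file names and the 50-token window size are my own illustrative assumptions, not the exact published methodology): text emitted by a model is scanned for long windows that appear verbatim in a reference corpus, which is roughly how researchers confirm that an output is memorized training data rather than coincidence.

```python
# Flag any n-token window of model output that appears verbatim in a
# reference corpus. Whitespace tokenization and n=50 are simplifying
# assumptions; real evaluations use proper tokenizers and huge indexes.

def ngram_windows(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_hits(model_output, corpus_text, n=50):
    corpus_ngrams = ngram_windows(corpus_text.split(), n)
    out_tokens = model_output.split()
    hits = []
    for i in range(len(out_tokens) - n + 1):
        window = tuple(out_tokens[i:i + n])
        if window in corpus_ngrams:
            hits.append(" ".join(window))
    return hits

# Hypothetical usage: model_output.txt holds text the model produced,
# web_snapshot.txt holds candidate training text scraped from the web.
with open("model_output.txt") as f_out, open("web_snapshot.txt") as f_ref:
    found = verbatim_hits(f_out.read(), f_ref.read())
print(f"{len(found)} verbatim 50-token spans recovered")
```

Note the check is one-directional: it can confirm that some memorized text came out, but it gives an attacker no way to steer the model toward a specific work, which is the difficulty described above.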
I don't think it really matters how accessible it is; what matters is the purpose of use. In a nutshell, fair use covers purposes like education, news reporting, and criticism. After that, the first consideration is whether the use is commercial in nature.
ChatGPT's use isn't education (research); they're developing a commercial product. Even the early versions were not so much prototypes as part of the same product they have today. And even if it were considered under a research fair use exception, the product absolutely is commercial in nature.
Whether or not the data was openly accessible doesn't really matter; more than likely the accessible data itself is a copyright violation. That would be a separate violation, but it absolutely does not excuse ChatGPT's subsequent violation. ChatGPT also isn't just reading the data at its source; it's copying it into its training dataset, and that copying is unlicensed.
Actually, the act of copying a work covered by copyright is not itself illegal. If I check out a book from a library and copy a passage (or the whole book!) to reread later, or for some other use limited strictly to myself, that's actually legal. If I turn around and share that passage with a friend in a way that's not covered under fair use, that's illegal. It's the act of distributing the copy that's illegal.
That's why whether the AI model is publicly accessible does matter. A company is considered a "person" under copyright law. So OpenAI can scrape all the copyrighted works off the internet it wants, as long as it didn't break laws to gain access to them. (In other words, articles freely available on CNN's website are free to be copied but not distributed, while circumventing the New York Times' paywall to get articles you didn't pay for is not legal access.) OpenAI then encodes those copyrighted works in its models' weights.

If it provides open access to those models, and people execute these attacks to recover pristine copies of copyrighted works, that's illegal distribution. If it keeps access only for employees, and they execute attacks that recover pristine copies of copyrighted works, that's keeping the copies within the use of the "person" (the company), so it is not illegal. If it lets employees take the copyrighted works home for non-work use (or use the AI model for non-work purposes and recover the pristine copies), that's illegal distribution.
I'm going to need you to back that up with a source. Specifically, legislation.
What you're getting at here is the fair use exemption for education or research, which I have already explained. To qualify as fair use, the use has to fall under specific purposes (education, research, news, criticism, or comment). After that, the first thing the court considers is whether the use is commercial in nature; the second is the amount of copying.
You checking a book out of a library and copying down a passage will almost certainly be education/research, and probably noncommercial, so it will most likely be fair use. Copying the whole book might also be fair use, but it is less likely to be so. Copying a book for a commercial report is far less likely to qualify.
The fact that it's "strictly limited to yourself" has no real bearing in law. Like I say, this isn't research. They're not writing academic papers and releasing their dataset for others to reproduce and verify their work, and even the earliest versions of their training have some presence in the commercial product they have developed today. Their use is thus not research, so not fair use; and even if you did consider it research, it is highly commercial in nature and they are copying full works into their training dataset.
Bringing in the whole "the law treats corporations as people" angle further proves you don't really know how IP law works. Just because something is published and freely accessible does not give the reader unlimited rights to copy it. Fair use is an extremely limited exemption.