this post was submitted on 22 Dec 2024
1588 points (97.5% liked)

Technology

It's all made from our data, anyway, so it should be ours to use as we want

[–] [email protected] 3 points 1 day ago (1 children)

Are you threatening me with a good time?

First of all, whether these LLMs are "illegally trained" is still a matter before the courts. Training an LLM doesn't literally copy the training data into the model, so it's unclear whether copyright even applies.

Secondly, I don't think that making these models public domain would have the negative effects that people angry about AI assume. When a company runs a closed model internally, like ChatGPT, the model is never available for download in the first place; public domain or not, you can't get a copy of it. When a company releases an open-weight model for public use, on the other hand, it usually encumbers the weights with a license that makes them harder for competitors to monetize or build on. Making those weights public domain would greatly increase their utility. It might make future releases less likely, but in the meantime it would greatly accelerate AI development.

[–] [email protected] 2 points 1 day ago (3 children)

The LLM does reproduce copyrighted data though.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (1 children)

Not 1:1. Even overfitted images still differ considerably from their originals. If you chose "reproduce" to make that point, that's why OP clarified that training doesn't literally copy the data; the actual data sitting inside the model would be a different story. Because these models are (in simplified form) a bunch of really complex math that produces material, it's a mathematical inevitability that they can produce copyrighted material, even in outputs that aren't the result of overfitting, just as infinite monkeys on infinite typewriters will eventually reproduce every piece of copyrighted text.
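The infinite-monkeys point can be made concrete with a toy simulation (the alphabet and target string below are hypothetical, not anything from the thread): a generator emitting uniformly random characters will, given enough draws, emit any fixed target string without ever having "copied" it.

```python
import random

# Toy "infinite monkeys" sketch: draw random strings until a fixed
# target appears. Alphabet and target are hypothetical examples.
ALPHABET = "ab"
TARGET = "abba"

def trials_until_match(target: str, rng: random.Random) -> int:
    """Count uniform random draws until the target string appears."""
    trials = 0
    while True:
        trials += 1
        candidate = "".join(rng.choice(ALPHABET) for _ in range(len(target)))
        if candidate == target:
            return trials

rng = random.Random(42)  # seeded so the run is repeatable
n = trials_until_match(TARGET, rng)
# With a 2-letter alphabet and a length-4 target, each draw matches
# with probability 1/16, so a match typically appears within a few
# dozen draws -- the output was never stored, yet it gets reproduced.
print(n)
```

The same logic scales: the match probability per draw is |alphabet|^-len(target), tiny but nonzero, so reproduction is inevitable over enough draws even though nothing was memorized.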

But then I would point you to the camera on your phone. If you take a picture of a copyrighted work with it, you're still infringing, but the camera wasn't created with the intention of appropriating the material captured by its lens. That's why we don't blame the camera; we blame the person who used it for that purpose. AI users have the same ethical obligation not to steer the AI toward generating infringing material.

[–] [email protected] 2 points 1 day ago

And the easiest way to do that is to not include infringing material in the first place.

[–] [email protected] 2 points 1 day ago

*it can produce data identical to data that has been copyrighted before