this post was submitted on 22 Aug 2023
767 points (95.7% liked)

Technology


OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series::A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

It's honestly a good question. It's perfectly legal for you to memorize a copyrighted work, and in some contexts you can recite it, too (particularly under the perilous doctrine of fair use). Even if you don't recite a copyrighted work directly, you are certainly allowed to learn to write by reading copyrighted books and then produce your own writing based on what you've read. You'll probably try your best to avoid copying anyone, but you might still make mistakes, simply by forgetting that some idea isn't your own.

But can AI? If we view AI as essentially an artificial brain, shouldn't it be able to do what humans can do? At the same time, it isn't actually a brain, nor is it a human. Humans are limited in what they can remember, whereas an AI's memory could be virtually boundless.

If we're looking at intent, the AI companies certainly aren't trying to recreate copyrighted works; as we've seen, they've actively tried to prevent it. Nor do LLMs store the copyrighted works directly. What they store is essentially a set of numerical weights that even experienced researchers struggle to interpret. The companies aren't denying that their models read copyrighted works (as all of us do), but arguably the models aren't trying to write copyrighted works.
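As a toy illustration (my own sketch, nothing like how transformer LLMs actually work, but the same principle in miniature), a character-level bigram "model" makes the point concrete: training produces a table of probabilities — numbers, not a copy of the text — yet sampling from those numbers can still regurgitate fragments of the training source:

```python
import random

def train_bigram(text):
    """'Train' on a text: count which character follows which,
    then normalize the counts into probabilities (the 'weights')."""
    counts = {}
    for a, b in zip(text, text[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sample(weights, start, n, rng):
    """Generate n characters by repeatedly sampling from the
    probability table. No text is stored, only the statistics."""
    out = start
    for _ in range(n):
        nxt = weights.get(out[-1])
        if not nxt:
            break
        chars, probs = zip(*nxt.items())
        out += rng.choices(chars, weights=probs)[0]
    return out

text = "the boy who lived. the boy who lived."
weights = train_bigram(text)

# The model holds only floats keyed by characters, not the sentence...
assert all(isinstance(p, float)
           for nxt in weights.values() for p in nxt.values())

# ...yet generation can still echo phrases from the single source it saw.
print(sample(weights, "t", 20, random.Random(0)))
```

The weights here are just conditional probabilities, yet because they were fitted to one source, generation tends to reproduce its phrases — a tiny analogue of why a model can "memorize" without containing a literal copy.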