this post was submitted on 10 Jul 2023
89 points (100.0% liked)

Technology


In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (18 children)

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright? Now, if the information they used is locked behind paywalls and was obtained illegally, then sure, fire ze missiles, but if it is readily accessible and not being reprinted wholesale by the AI, then it doesn't seem that different from any of the other millions of ways we use data in everyday life. Just because a machine learned it instead of a human doesn't make it inherently wrong. I am very open to discussion on this, and if anyone has a counter-argument, I'd love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

Edit: I asked GPT-4 what it thought about this, and here is what it said:

As an AI developed by OpenAI, I don't access any copyrighted databases, subscription sites, confidential information, or proprietary databases for my learning or response generation. I was trained on a diverse range of internet text, which includes publicly available data in multiple languages and formats. The training also involves learning from large-scale datasets that contain parts of many kinds of texts (books, websites, scientific articles, etc.). However, I don't know specifics about which documents were in my training set or have access to any specific documents or sources.

Your question seems to pertain to how copyright law applies to AI. This is a complex and evolving area of law and policy, largely because AI systems like me do not "read" or "use" material in the same way a human would. A human might read a book and then consciously recall information from it when writing or speaking. In contrast, AI models do not have consciousness or awareness, they analyze statistical patterns in the data they were trained on and use these patterns to generate responses.

When a human uses copyrighted material to create something new, the law often considers this to be a derivative work. It is typically necessary to obtain permission from the copyright holder to create derivative works. However, AI doesn't use or recall copyrighted material directly; instead, it learns patterns from a huge amount of text data to predict or generate text based on a given prompt.

Dissemination of copyrighted material, such as through forum posts or Q&A platforms, without proper licensing or within the bounds of fair use, can be a violation of copyright law. AI doesn't 'disseminate' copyrighted material; it generates responses based on patterns it has learned.

That said, the legal interpretation of how copyright law applies to machine learning and AI is still evolving, and there can be a lot of complexities and nuances. Therefore, the information here should not be taken as legal advice.

As of my last update in September 2021, this was the case. However, I recommend checking the most recent legal standpoints and opinions, as this area is still evolving rapidly.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

@chemical_cutthroat

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright?

The first conceptual mistake in this analogy is assuming the LLM is "writing". A person, or any sentient being, who writes is doing intellectual work, which is why the example book report and movie review won't be accused of plagiarism. Plagiarism, very basically, is stealing someone's output when that output isn't legally owned; once legal ownership is involved, it moves into copyright infringement territory.

LLMs produce text based on statistical probability, meaning they quite literally ape/replicate the aesthetic form of a known genre of textual output, and in these cases the originals have the legal status of intellectual property. So yes, an LLM-generated text in the form of a book report or movie review looks the way it does because it copies, with no creative intent, previous works of the genre. It's the same way YouTube video essays get taken down when they're just a collection of movie clips stitched together to sound like a full dialogue. Of course, with that example YouTube clip, if you can argue it's a creative output where an artist is forming a new piece out of a collage of previous media, the rights owners of those movie clips might lose their claim against the video. You can't make that defence with OpenAI.
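The "statistical probability" point can be made concrete with a toy sketch: a bigram model, vastly simpler than a real LLM but built on the same statistical idea. It knows nothing except which words followed which in its "training" text (the corpus below is made up for illustration), and it generates a sentence purely by sampling the next word in proportion to observed frequency.

```python
import random
from collections import defaultdict

# Made-up toy corpus standing in for training data.
corpus = ("jane walked to the end of the road and turned around . "
          "jane walked to the store and back .")
tokens = corpus.split()

# Count bigram frequencies: for each word, which words followed it and how often.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

rng = random.Random(0)  # fixed seed so runs are repeatable

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short continuation from a prompt word.
word, out = "jane", ["jane"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The model can only ever emit word sequences whose local patterns already exist in its training text, which is the crux of the argument above: the output's form is inherited entirely from the corpus, with no intent behind it.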

@stopthatgirl7

[–] [email protected] 1 points 1 year ago (1 children)

If you can truly tell me how our form of writing is any different from how an AI writes, I'll do a backflip. Humans are pattern seekers. We do everything based on patterns. We can't handle chaos. Here's an example.

Normal sentence:

Jane walked to the end of the road and turned around.

Chaotic Sentence:

The terminal boundary of the linear thoroughfare did Jane ambulate toward, then her orientation underwent a 180-degree about-face, confounding the conventional concept of destinational progression.

On first pass, I bet you zoned out halfway through that second sentence because there was no pattern or rhythm to it; it was word salad. It still works as a sentence, but it's chaotic and strange to read.

The first sentence is a generic sentence: subject, predicate, noun, verb, etc. It follows the pattern of English writing that we are all familiar with because it's how we were taught. An AI will do the same thing: it will generate a pattern of speech the same way it was taught. Now, if you were taught in a public school and didn't read a book or watch a movie for your entire life, I would let you have your argument that

@cendawanita

an LLM-generated textual output that is in the form of a book report or movie review looks the way it does by copying with no creative intent previous works of the genre.

However, you can't say that a human does any different. We are the sum of our experience and our teachings. If you get truly granular with it, you can trace the genesis of every sentence a human writes, or even every thought a human thinks, back to a point of inception, where the human learned how to write and think in the first place, and it will always be based on some sensory experience the human has had, whether through reading, listening to music, watching a movie, or any other way we consume the data around us. The second sentence is an example of this. I thought to myself, "how would a pedantic asshat write this sentence?" and I wrote it. It didn't come from some grand creative well of sentience that every human can draw from when they need a sentence; it came from experience and learning, just like the first, and from the same well of knowledge that an AI draws from when it writes its sentences.

[–] [email protected] 1 points 1 year ago

@chemical_cutthroat
Again, all of your analogical effort presumes that an LLM is synthesizing. When I say, specifically, that they generate outputs based on statistical probability, that is not at all the same as a sentient process of reiterative learning based on available knowledge.

If you can't get that distinction, then any further effort to respond to you would demand too much from me (personally; I wish the best to others who'd like to try). If you're really sincere though, honestly this has been best elaborated by Timnit Gebru and Emily Bender in their writing on the "stochastic parrot". Please do have a read. https://dl.acm.org/doi/10.1145/3442188.3445922
@stopthatgirl7
