this post was submitted on 12 Apr 2025
29 points (91.4% liked)

Ask Lemmygrad


A place to ask questions of Lemmygrad's best and brightest


Hey there, sometimes I see people say that AI art is stealing real artists' work, but I also saw someone say that AI doesn't steal anything, does anyone know for sure? Also here's a twitter thread by Marxist twitter user 'Professional hog groomer' talking about AI art: https://x.com/bidetmarxman/status/1905354832774324356

[–] [email protected] 3 points 17 hours ago

AI, like so much other pointless online discourse^TM^ that can be summarized as "does X suck/should X be abolished/will X exist under socialism", follows two basic sides of an argument:

  1. X only sucks because of capitalism and under socialism, X will actually be good for society.

  2. X will undergo such qualitative change under socialism that it is no longer X but Y.

All AI discourse^TM^ follows this basic pattern. On one side, you have people like bidetmarxman who argue that AI only sucks because capitalism sucks, and on the other side, you have people who say that AI sucks while also saying that the various algorithms and technologies present in useful automation don't count as AI but are something different.

The way to not fall into the trap is to ask these simple questions:

  1. Does X exist in AES?

  2. What is AES's relationship with X?

If we try to apply this to AI in general, the answers are very simple. AI is not only pushed by the Chinese state, but it's already very much part of Chinese society, where even average people benefit from things like self-driving buses. China is even incorporating AI into its educational curriculum. This makes sense, since people are going to use it anyways, so might as well educate them on proper use and the pitfalls of misuse.

The question of AI art within China is far murkier. There seems to be some hesitation: for example, a recent law requires AI art to be labeled as such. I don't think they would bother to mandate disclosure of AI art being AI art if it were so innocent.

[–] [email protected] 8 points 3 days ago

A lot of computer algorithms are inspired by nature. Sometimes when we can't figure out a problem, we look at how nature solves it, and that inspires new algorithms. One problem computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone's voice is different, and even the same person may speak in different tones, with different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

Computer scientists recognized that computers are very rigid logical machines that process instructions serially, like stepping through a logical proof, while brains are decentralized, massively parallelized computers that process everything simultaneously through a network of neurons. A brain's "programming" is determined by the strengths of the connections between its neurons, which are analogue rather than digital, and it produces approximate solutions rather than the rigorous ones of a traditional computer.

This led to the birth of the artificial neural network. This is a mathematical construct that describes a system of neurons and the configurable strengths of all its neural connections. From that, mathematicians and computer scientists figured out ways such a neural network could be "trained," i.e. have its neural pathways configured automatically so it can "learn" new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you could also simulate it on a traditional computer in software.
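
To make that concrete, here's a minimal sketch of the idea in Python using only numpy: a two-layer network whose entire "knowledge" lives in two small weight matrices, trained on the toy XOR problem. The network size, the task, and the crude finite-difference training step are all illustrative, not how real systems are built (those use backpropagation and vastly larger networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 2 input neurons -> 4 hidden neurons -> 1 output neuron.
# The weight matrices are the configurable connection strengths; this is
# all the network "knows".
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    h = np.tanh(x @ W1)       # hidden activations
    return np.tanh(h @ W2)    # output neuron

# Known examples to learn from: the XOR truth table.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def loss():
    return float(np.mean((forward(X) - y) ** 2))

# "Training": nudge every weight in whichever direction reduces the
# error, over and over. (Real systems find this direction far more
# efficiently with backpropagation; finite differences keep it short.)
eps, lr = 1e-4, 0.5
for _ in range(3000):
    for W in (W1, W2):
        grad = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            W[idx] += eps
            up = loss()
            W[idx] -= 2 * eps
            grad[idx] = (up - loss()) / (2 * eps)
            W[idx] += eps     # restore the original weight
        W -= lr * grad

print(forward(X).round(2))    # should approach [[0], [1], [1], [0]]
```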

Computer scientists quickly found that by applying this construct to problems like speech recognition, they could supply the neural network tons of audio samples along with their transcribed text, and the network would automatically find patterns and generalize from them, so that when brand new audio is recorded it can transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the approach started to be implemented in many places; language translation software, for example, is also based on artificial neural networks.

Recently, people have figured out that this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated tags that describe them, and it will learn to generalize patterns associating the images and the tags. Depending upon how you train it, this can go both ways. There are img2txt models, called vision models, that can look at an image and tell you in written text what the image contains. There are also txt2img models, which you can feed a description of an image and they will generate an image based upon it.
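
For illustration, this is roughly what running a txt2img model looks like with the open-source Hugging Face diffusers library; the checkpoint name is one publicly released Stable Diffusion model, and the prompt and filename are made up:

```python
# Sketch of a txt2img workflow; assumes the diffusers/torch packages are
# installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The trained network maps a text description to an image that fits it.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```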

All the technology is ultimately the same between text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing: taking a neural network and a large dataset of inputs and outputs, and training the network so it generalizes patterns from the data and can thus produce appropriate responses to brand new inputs.

A common misconception about AI is that it has access to a giant database and the outputs it produces are just stitched together from that database, kind of like a collage. However, that's not the case. The neural network is always trained on far more data than could ever fit inside it, so it is impossible for it to remember its entire training data (if it could, that would be a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up "distilled" in the neural network is just a big file called the "weights" file, which is a list of all the neural connections and their associated strengths.
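
Some back-of-envelope arithmetic shows why memorization is impossible. The figures below are illustrative round numbers, roughly in the ballpark of publicly reported sizes for large image models and their training sets:

```python
# Rough, illustrative arithmetic: a ~1-billion-parameter image model
# stored at 2 bytes per weight vs. a training set of ~2 billion images.
params = 1_000_000_000
weight_bytes = params * 2                  # fp16 weights file
images = 2_000_000_000
dataset_bytes = images * 100_000           # assume ~100 kB per image

print(f"weights file:  ~{weight_bytes / 1e9:.0f} GB")    # ~2 GB
print(f"training set:  ~{dataset_bytes / 1e12:.0f} TB")  # ~200 TB
print(f"ratio:         ~{dataset_bytes / weight_bytes:,.0f}x")  # ~100,000x
```

The dataset is on the order of a hundred thousand times larger than the weights file it gets "distilled" into, so what survives training can only be generalized patterns, not the images themselves.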

When an AI model is shipped, it is not shipped with the original dataset, and it is impossible for it to reproduce that dataset. All it can reproduce is what it "learned" during the training process.

When the AI produces something, information first enters an "input" layer of neurons, kind of like sensory neurons; that input may be the text prompt, an image, or something else. The information then propagates through the network, and at the end it reaches an "output" layer of neurons, kind of like motor neurons, each associated with some action, like plotting a pixel with a particular color value or writing a specific character.

There is a feature called "temperature" that injects random noise into this "thinking" process; that way, if you run the algorithm many times with the same prompt, you will get different results, because the thinking is nondeterministic.
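
Here's a minimal sketch of how temperature is typically applied when picking an output, with made-up scores ("logits") for four candidate outputs; a real text model does this over tens of thousands of candidate tokens:

```python
import numpy as np

rng = np.random.default_rng()

def sample(logits, temperature=1.0):
    """Pick one output given the network's raw scores.

    Dividing by the temperature before the softmax flattens (T > 1) or
    sharpens (T < 1) the probability distribution; as T approaches 0,
    this approaches always picking the single highest-scoring option.
    """
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]  # made-up scores for four candidates
print([sample(logits, temperature=0.2) for _ in range(10)])  # mostly 0
print([sample(logits, temperature=1.5) for _ in range(10)])  # more varied
```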

Would we call this process of learning "theft"? Personally, I think it's weird to call it "theft": it is directly inspired by the way biological systems learn, of course with some differences to make it more suited to run on a computer, but the very broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as looking at a bunch of photos as reference to learn to draw. Am I "stealing" those photos when I then draw an original picture of my own? People who claim AI is "stealing" either don't understand how the technology works, or reach for claims like it not having a soul so it doesn't count, or point to differences between AI and humans that are real but aren't relevant differences.

Of course, this only applies to companies that scrape data that really was posted publicly for everyone to freely look at, like on Twitter. Some companies have been caught illegally scraping data that was never put anywhere publicly, like Meta, which got in trouble for scraping libgen, much of whose content is supposed to be behind a paywall. However, the law already protects people whose paywalled data gets illegally scraped, as Meta is being sued over exactly this, so it's already on the side of the content creator here.

Even then, I still wouldn't consider it "theft." Theft is when you take something from someone and deprive them of its use. This case would be piracy: copying someone's intellectual property for your own use without their permission, which ultimately doesn't deprive the original person of its use. At best you can say that in some cases AI art, and AI technology in general, can be based on piracy. But this is definitely not a universal statement. And personally I don't even like IP laws, so I'm not exactly the most anti-piracy person out there lol

[–] [email protected] 5 points 3 days ago

I don't wanna get too deep into the weeds of the AI debate because I frankly have a knee-jerk dislike for AI, but from what I can skim of hog groomer's take, I agree with their sentiment. A lot of the anti-AI sentiment is based on longing for an idyllic utopia where a cottage industry of creatives exists protected from technological advancements. I think this is an understandable reaction to big tech trying to cause mass unemployment and climate catastrophe for a dollar while bringing down the average level of creative work. But stuff like this prevents sincerely considering if and how AI can be used as tooling by honest creatives to make their work easier or faster or better. This kind of nuance as of now has no place in the mainstream, because the mainstream has been poisoned by a multi-billion dollar flood of marketing material from big tech consisting mostly of lies and deception.

[–] [email protected] 13 points 4 days ago (6 children)

The messaging from the anti-generative-AI people is very confused and self-contradictory. They have legitimate concerns, but when the people who say "AI art is trash, it's not even art" also say "AI art is stealing our jobs"...what?

I think the "AI art is trash" part is wrong. And it's just a matter of time before its shortcomings (aesthetic consistency, ability to express complexity, etc.) are overcome.

The push against developing the technology is misdirected effort, as it always is with liberals. It's just delaying the inevitable. Collective effort should be aimed at affecting who has control of the technology, so that the bourgeoisie can't use it to impoverish artists even more than they already have. But that understanding is never going to take root in the West because the working class there have been generationally groomed by their bourgeois masters to be slave-brained forever losers.

[–] [email protected] 7 points 4 days ago (1 children)

It's a disruptive new technology that disrupts an industry that already has trouble providing a living to people in the Western world.

The reaction is warranted, but it's now a fact of life. It just shows how stupid our value system is, and most liberals have trouble reconciling that their hardship is due to their value and economic system.

It's just another means of automation and should be seized by the experts to gain more bargaining power; instead they fear it and bemoan reality.

So nothing new under the sun...

[–] [email protected] 6 points 3 days ago (1 children)

It's a disruptive new technology that disrupts an industry that already has trouble providing a living to people in the Western world.

Yes, and the solution to the new trouble is exactly the same as the solution to the old trouble, but good luck trying to tell that to liberals when they have a new tree to bark up.

[–] [email protected] 4 points 3 days ago

I tried, but they are so far into thinking that communism does not work...

[–] [email protected] 8 points 4 days ago

It can be frustrating sometimes. I've encountered people online who I otherwise respected in their takes on things, who would then go viciously anti-AI in a very simplistic way. Having followed the subject in a lot of detail, engaging directly with services that use AI and people who use those services, and trying to discern what makes sense as a stance to have and why, it would feel very shallow and knee-jerk to me. I saw, for example, how with one AI service, Replika, there were on the one hand people whose lives were changed for the better by it, and on the other hand people whose lives were thrown for a loop (understatement of the century) when the company acted duplicitously and started filtering their model in a hamfisted way that made it act differently and reject people over things like a roleplayed hug. There's more to that story, some of which I don't remember in as much detail now because it happened over a year ago (maybe over two years ago? has it been that long?). But the point is, I have directly seen people talk of how AI made a difference for them in some way. I've also seen people hurt by it, usually as an indirect result of a company's poor handling of it as a service.

So there are the fears that surround it and then there is what is happening in the day to day, and those two things aren't always the same. Part of the problem is the techbro hype can be so viciously pro-AI that it comes across as nothing more than a big scam, like NFTs. And people are not wrong to think the hype is overblown. They are not wrong to understand that AI is not a magic tool that is going to gain self-awareness and save us from ourselves. But it does do something and that something isn't always a bad thing. And because it does do positive things for some people, some people are going to keep trying to use it, no matter how much it is stigmatized.

[–] [email protected] 11 points 4 days ago* (last edited 4 days ago) (2 children)

I believe the main issue with AI currently is its lack of transparency. I do not see any disclosure of how the AI gathers its data (though I'd assume they just scrape it from Google or other image sources), and I believe this is why many of us believe that AI is stealing people's art (even though art can just as easily be stolen with a simple screenshot even without AI, and stolen art was being put on t-shirts even before the rise of AI; not that this makes AI art theft any less problematic or demoralizing for aspiring artists). Also, the way companies like Google and Meta use AI raises tons of privacy concerns IMO, especially given their track record of stealing user data even before the rise of AI.

Another issue I find with AI art/images is just how spammy they are. Sometimes I search for references to use for drawing as a hobby (oftentimes various historical armors, because I'm a massive nerd), only to be flooded with AI slop that pretty much never gets the details right.

I believe that if AI models were primarily open-source (like DeepSeek), with data voluntarily given by real volunteers, AND were transparent enough to tell us what data they collect and how, then much of the hate AI is currently receiving would probably dissipate. Also, AI art as it currently exists is soulless as fuck IMO. One of the only successful implementations of AI in creative works I have seen so far is probably Neuro-Sama.

[–] [email protected] 10 points 4 days ago

I very much agree, and I think it's worth adding that if open source models don't become dominant then we're headed for a really dark future where corps will control the primary means of content generation. These companies will get to decide what kind of content can be produced, where it can be displayed, and so on.

The reality of the situation is that no amount of whinging will stop this technology from being developed further. When AI development occurs in the open, it creates a race-to-the-bottom dynamic for closed systems. Open-source models commoditize AI infrastructure, destroying the premium pricing power of proprietary systems like GPT-4. No company is going to spend hundreds of millions training a model when open alternatives exist. Open ecosystems also enjoy stronger network effects, attracting more contributors than is possible with any single company's R&D budget. How this technology is developed and who controls it is the constructive thing to focus on.

[–] [email protected] 21 points 5 days ago* (last edited 5 days ago) (7 children)

The privatisation of the technology is bad, but not the technology itself. Labor should be socialised, and to be against this is not marxist.

Proprietorship is heavily baked into our modern cultures due to liberalism, so you are going to hear a lot of bad takes such as "stealing", or moralism based on the subjective quality of a given AI art's aesthetics (even if you were to homogenise the level of "quality" and call it substandard, all that means is that the technology should improve; talking about, for example, the "soul" of art is just metaphysical nonsense: human beings and their productions do not possess some other-worldly mysticism) - even from people who consider themselves marxists and communists.

The advance of technology at the cost of an individual's job is the fault of the organisation and allocation of resources, i.e. capital, not the technology itself. Put it this way: people should be free to make art however they want, and their livelihood should not have to depend on it.

If you enjoyed baking but lamented the industrialisation and mechanisation of baking because it cost you your livelihood, and you said it was because the machines were stealing your methods and the taste of the products wasn't as good, would we still consider it a marxist position? Of course not.

The correct takes can be found here:

If you're a marxist, do not lament the weaver for the machine (Alice Malone): https://redsails.org/the-sentimental-criticism-of-capitalism/

Marxism is not workerism or producerism; both could lead to fascism.

Artisans concerned about proletarianisation, as they effectively lose their labor aristocracy or their path to the petite-bourgeoisie, may attempt to protect their material prospects and have reactionary takes. Again, this obviously is not marxist.

TLDR - bidetmarxman is correct. I would argue a lot of so-called socialists need self-reflection, but like I said, their views probably reflect their relative class positions, and it is really hard to convince someone against their perceived personal material benefits.

[–] [email protected] 14 points 5 days ago (30 children)

What people are really upset with is the way this technology is applied under capitalism. I see absolutely no problem with generative AI itself, and I'd argue that it can be a tool that allows more people to express themselves. People who argue against AI art tend to conflate the technical skill and the medium being used with the message being conveyed by the artist. You could apply the same argument to somebody using a tool like Krita and claim it's not real art because the person using it didn't spend years learning how to paint in oils. It's a nonsensical argument in my opinion.

Ultimately, the art is in the eye of the beholder. If somebody looks at a particular image and that image conveys something to them or resonates with them in some way, that's what matters. How the image was generated doesn't really matter in my opinion. You could make a comparison with photography here as well. A photographer doesn't create the image that the camera captures, they have an eye for selecting scenes that are visually interesting. You can give a camera to a random person on the street, and they likely won't produce anything you'd call art. Yet, you give the same camera to a professional and you're going to get very different results.

Similarly, anybody can type some text into a prompt and produce some generic AI slop, but an artist would be able to produce an interesting image that conveys some message to the viewer. It's also worth noting that workflows in tools like ComfyUI are getting fairly sophisticated and go far beyond typing a prompt to get an image.

My personal view is that this tech will allow more people to express themselves, and slop will look like slop regardless of whether it's made with AI or not. If anything, I'd argue that lowering the barrier to making good-looking images means that people will have to find new ways to make art expressive beyond just technical skill. This is similar to the way graphics stopped being the defining characteristic of video games. Often, it's indie games with simple graphics that end up being far more interesting.
