this post was submitted on 21 May 2024
78 points (97.6% liked)

Technology


cross-posted from: https://lemmy.zip/post/15863526

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison

[–] [email protected] 33 points 5 months ago (35 children)

How are they abuse images if no abuse took place to create them?

[–] [email protected] 3 points 5 months ago (3 children)

If the model was trained on CSAM, then it is dependent on abuse.

[–] [email protected] 25 points 5 months ago (1 children)

That's a heck of a slippery slope I just fell down.

If AI responses can be held criminally liable for crimes in their training data, then we can all be held liable for every text response from GPT, since it's trained on Reddit data and likely includes multiple instances of brigading, swatting, manhunts, etc.

[–] [email protected] 2 points 5 months ago

You just summarized the ongoing ethical concerns that experts and common folk alike have been discussing over the past few years.

[–] [email protected] 19 points 5 months ago

As I said in my other comment, the model does not have to be trained on CSAM to create images like this.

[–] [email protected] 1 points 5 months ago (1 children)

That's irrelevant; any realistic depiction of children engaged in sexual activity meets the legal definition of CSAM. Even applying filters to images of consenting adults could qualify as CSAM if the intent was to make the actors appear underage.

[–] [email protected] 3 points 5 months ago* (last edited 5 months ago) (2 children)

Because they are images of children being graphically raped, a form of abuse. Is an AI generated picture of a tree not a picture of a tree?

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago) (4 children)

No, it isn't, any more than a drawing of a car is a real car, or drawings of money are real money.

[–] [email protected] 4 points 5 months ago (13 children)

Material showing a child being sexually abused is child sexual abuse material.

[–] [email protected] 1 points 5 months ago (1 children)

Nobody is saying they're real, and I now see what you're saying.

Judging by your answers, your question is more at-face-value than people assume:

You are asking:

"Did violence occur in real life in order to produce this violent picture?"

The answer is, of course, no.

But people are interpreting it as:

"This is a picture of a man being stoned to death. Is this picture violent, if no violence took place in real life?"

To which the answer is, of course, yes.

[–] [email protected] 1 points 5 months ago (6 children)

It can be abhorrent and unlikable; it's still not abuse.

[–] [email protected] 2 points 5 months ago (3 children)

We're not disagreeing.

The question was:

"Is this an abuse image if it was generated?"

Yes, it is an abuse image.

Is it actual abuse? Of course not.

[–] [email protected] 3 points 5 months ago (4 children)

It's a picture of a hallucination of a tree. Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.

[–] [email protected] 2 points 5 months ago (1 children)

All the lemmy.world commenters came out to insist "that painting is a pipe, though."

Yeah? Smoke it.

[–] [email protected] 2 points 5 months ago (1 children)

Lemmy.world and bandwagoning on a sensitive topic that they know nothing about? Classic combo.

[–] [email protected] 2 points 5 months ago (1 children)

You'd figure "CSAM" was clear enough. You'd really figure. But apparently we could specify "PECR" for "photographic evidence of child rape" and people would still insist "he drew PECR!" Nope. Can't. Try again.

[–] [email protected] 1 points 5 months ago (1 children)

I mean... regardless of your moral point of view, you should be able to answer that yourself. Here's an analogy: suppose I draw a picture of a man murdering a dog. It's an animal abuse image, even though no actual animal abuse took place.

[–] [email protected] 3 points 5 months ago (1 children)

It's not, though; it's just a drawing.

[–] [email protected] 1 points 5 months ago (1 children)

Except that it is an animal abuse image, drawing, painting, fiddle, whatever you want to call it. It's still the depiction of animal abuse.

Same with child abuse, rape, torture, killing or beating.

Now, I know what you mean by your question. You're trying to establish that the image/drawing/painting/scribble is harmless because no actual living being suffered. But that doesn't mean they don't depict it.

Again, I'm seeing this from a very practical point of view. However you see these images through the lens of your own morals or points of view, that's a totally different thing.

[–] [email protected] 3 points 5 months ago (1 children)

And when characters are killed on screen in movies, are those snuff films?

[–] [email protected] 2 points 5 months ago

No, they're violent films.

Snuff is a different thing, because it's supposed to be real. Snuff films depict violence in a very real sense, so they're violent. Fiction films also depict violence, so they're violent too. It's just that they're not about real violence.

I guess what you're really trying to say is that "Generated abuse images are not real abuse images." I agree with that.

But at face value, "Generated abuse images are not abuse images" is incorrect.
