this post was submitted on 26 Dec 2023
111 points (81.4% liked)

Technology


Study shows AI image-generators being trained on explicit photos of children

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I agree with you in instances where it's not generating a real person. But there are cases where people use tools like this to generate realistic-looking but fake images of actual, specific real-life children. That is of course abusive to the child. It's still bad when done to adults too; it's essentially a form of defamation.

I really do hope legislation around this issue is narrowly tailored to actual abuse like what I described above, but given the "protect the children" nonsense lawmakers trot out about just about every technology, including end-to-end encryption, I'm not very optimistic.

Another thing I wonder about is whether AI could get so realistic that it becomes impossible to prove beyond a reasonable doubt that anyone possessing actual CSAM (where the image or victim isn't already known, so it can't be proven that way) is guilty, since any image could plausibly be fake. This issue goes far beyond child abuse; it would probably discredit video footage of robberies and the like too. We really are venturing into the unknown, and government isn't exactly known for adapting to technology...

But I think you're mostly correct, because the moral outrage on social media seems to be about the entire concept of fake sexual depictions of minors existing at all, rather than only about abusive ones.