this post was submitted on 21 Nov 2024
140 points (97.9% liked)

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest AI technology striving to expose unreported CSAM at scale.

[–] [email protected] -3 points 1 day ago (2 children)

At this point, how does it differ from generating AI-powered CP? Morons

[–] [email protected] 9 points 1 day ago (1 children)

Uh, well, this one tells you whether an image looks like it or not. It doesn't generate images.

[–] [email protected] 0 points 1 day ago (1 children)

If it knows whether an image looks like it, it can generate something like it; that's just one step further.

[–] [email protected] 2 points 1 day ago (1 children)

Correct: this kind of software is trained on CP data, so such models can easily be used to generate CP instead of recognizing it, which makes them very dangerous indeed.

Same idea as the current models that are trained to recognize cars: those models can also be used to generate a car, starting from noise.

[–] [email protected] 4 points 1 day ago (1 children)

I'm pretttty sure you can't just run it in reverse like that. There's a whole different training and operation methodology you have to use to support generating images rather than simple image classification.

[–] [email protected] 2 points 11 hours ago

There is a method of training where you use one system to make things and another to detect them. I forget the name of this approach, but it definitely is an approach.

[–] [email protected] 3 points 1 day ago (1 children)

It differs in basically being something completely different. This is a classification model; it doesn't have generative capabilities. Even if you were to get the model and its weights, and you tried to reverse engineer an "input" that it would classify as CP, it would most likely look like pure noise to you.
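For the curious, that kind of "reverse engineering" is basically gradient ascent on the input. A rough sketch, assuming some frozen PyTorch classifier (everything here is hypothetical, not Thorn's actual model):

```python
# Hypothetical sketch: gradient-ascent "inversion" of a classifier.
# `classifier` is any frozen image-classification network; the shapes
# and names here are illustrative assumptions.
import torch

def invert_class(classifier, target_class, steps=500, lr=0.05):
    classifier.eval()
    # Start from random noise and nudge it toward the target class.
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(x)
        # Maximize the target logit (minimize its negative).
        loss = -logits[0, target_class]
        loss.backward()
        opt.step()
    # Without a strong image prior, this usually lands on
    # adversarial noise, not a recognizable picture.
    return x.detach()
```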

Moron

[–] [email protected] 4 points 15 hours ago (1 children)

Generate porn, classify the output, result: very young-looking models.

Moron

[–] [email protected] -1 points 15 hours ago (3 children)

So you need to have a model that generates CP to begin with. Flawless reasoning there.

Look, it's clear you have no clue what you're talking about. Stop demonstrating it, moron.

[–] [email protected] 1 points 6 hours ago (1 children)

Alright, I found the name of what I was thinking of that sounds similar to what they're suggesting: generative adversarial network (GAN).

The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
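A minimal sketch of that loop in PyTorch (toy layer sizes, purely illustrative; note the discriminator here is co-trained from scratch, not a pre-existing classifier like Thorn's):

```python
# Minimal GAN training step; models and sizes are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) flattened images
    batch = real.size(0)
    z = torch.randn(batch, 64)
    fake = G(z)

    # Discriminator: learn to tell real apart from generated.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: learn to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```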

[–] [email protected] 1 points 4 hours ago (1 children)

Applying a GAN won't work. If used for filtering, it would result in outputs being skewed younger, but it won't show the body of a 9-year-old unless the model could do that from the beginning.

If used to "tune" the original model, it will result in massive hallucinations and aberrations that can produce false positives.

In both cases, decent results will be rare and time-consuming. Anybody with the dedication to attempt this already has pictures and can build their own model.

Source: I'm a data scientist

[–] [email protected] 1 points 4 hours ago

At least it's not "Source: I am a pedophile" lol

[–] [email protected] 2 points 12 hours ago* (last edited 11 hours ago) (1 children)

Not CP, but normal porn, then select on CP traits, moron

[–] [email protected] 1 points 11 hours ago (1 children)

https://en.m.wikipedia.org/wiki/False_positives_and_false_negatives

Not that I think you will understand. I'm posting this mostly for those moronic enough to read your comments and think "that seems reasonable"

[–] [email protected] 1 points 11 hours ago (1 children)

The model I use (I forget the name) popped out something pretty sus once. I wouldn't describe it as CP, but it was definitely weird enough to really make me uncomfortable. It's the only thing it ever made that I immediately deleted and removed from the recycling bin too lol.

The point I'm making is that this isn't as far fetched as you believe.

Plus, you can merge models. Get a general purpose model that knows what children look like, a general purpose pornographic model, merge them, then start generating and selecting images based on Thorn's classifier.
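For what it's worth, naive "merging" of two same-architecture checkpoints is just weight interpolation. A minimal sketch, assuming matching state-dict layouts (the toy models below are stand-ins):

```python
# Hypothetical sketch of naive weight merging between two models
# with identical architectures; state-dict keys must match.
import torch
import torch.nn as nn

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    # Linear interpolation of matching parameters.
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy stand-ins for two checkpoints of the same architecture.
model_a = nn.Sequential(nn.Linear(10, 10))
model_b = nn.Sequential(nn.Linear(10, 10))
model_a.load_state_dict(
    merge_state_dicts(model_a.state_dict(), model_b.state_dict(), alpha=0.5))
```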

[–] [email protected] 1 points 11 hours ago (1 children)

You can't merge a generative model and a classification model. You can run them in series to get a bunch of false positives/hallucinations, but you can't make one generate something from the other model.
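To make "in series" concrete, a minimal sketch, assuming a hypothetical `generator` and an unrelated `classifier`:

```python
# Sketch of "running in series": sample from one model, then score
# with a separate classifier and keep high-scoring samples.
# `generator` and `classifier` are assumed, independent models.
import torch

@torch.no_grad()
def generate_and_filter(generator, classifier, n=64, threshold=0.9):
    z = torch.randn(n, 64)                  # latent size is an assumption
    images = generator(z)                   # generator's own distribution
    scores = torch.sigmoid(classifier(images))[:, 0]
    # The classifier only selects among what the generator can already
    # produce; it adds no new generative capability.
    return images[scores > threshold]
```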

[–] [email protected] 1 points 6 hours ago

When I said a "general purpose model that knows what children look like" I didn't mean the classification model from the article. I meant a normal, general-purpose image generation model. When I said "that knows what children look like" I mean part of its training set is on children, because it's sort of trained a little on everything. When I said "pornographic model" I mean a model trained exclusively on NSFW content (and not including any CSAM, but that may be generous depending on how much care was put into the model's creation).