this post was submitted on 21 Nov 2024
110 points (97.4% liked)

Technology


Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest AI technology striving to expose unreported CSAM at scale.

top 50 comments
[–] [email protected] 5 points 1 hour ago* (last edited 1 hour ago) (1 children)

Jesus Christ. If someone ever got their hands on this model, they could use it to generate new material. The grossest possible AI model to date.

[–] [email protected] 2 points 1 hour ago

I thought being able to do that was already a thing. This is designed to do the opposite.

I know, I know.. bad actors and such.

[–] [email protected] 1 points 5 hours ago

... robo chocolate?

[–] [email protected] 99 points 21 hours ago* (last edited 21 hours ago) (2 children)

Thorn, the company backed by Ashton Kutcher, and which lobbied to have all messages in the EU monitored via Chat Control. No thanks.

https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/

[–] [email protected] 48 points 21 hours ago (2 children)

Just remember folks. Kutcher is a slimeball too.

The guy went from being a D-list star hanging out with the likes of Danny Masterson and going to Diddy’s infamous parties to suddenly, overnight, courting the US government and becoming the face of ‘helping’ children everywhere.

Yeah right…..

[–] [email protected] 16 points 19 hours ago (2 children)

I’d be wary of calling him guilty by association. Maybe when he realized who he was really hanging out with he was so horrified and disgusted that he just had to get involved and do something to fight back?

[–] [email protected] 18 points 20 hours ago (1 children)

People can grow and change. Not saying he did or didn’t. Just saying that people aren’t a monolith. It’s plausible he just grew and his views changed / evolved.

That being said, it’s highly convenient where he’s positioned himself these days…

[–] [email protected] 105 points 22 hours ago (2 children)

Not a single peep about false positives.

I'm sure it won't be abused though. And if anyone does complain, just get their electronics seized and checked, because they must be hiding something!

[–] [email protected] 3 points 1 hour ago

It could also, of course, make mistakes, but Kevin Guo, Hive's CEO, told Ars that extensive testing was conducted to reduce false positives or negatives substantially. While he wouldn't share stats, he said that platforms would not be interested in a tool where "99 out of a hundred things the tool is flagging aren't correct."

I take this to mean it is at least 1% accurate lol.
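To put rough numbers on that base-rate problem, here's a toy Python sketch. All three rates below are made up for illustration, since Hive shared no actual stats:

```python
# Toy base-rate math: why a seemingly accurate classifier can still be
# wrong about most of what it flags when the target content is rare.
# All numbers are hypothetical; Hive published no real figures.
true_positive_rate = 0.99    # assume it catches 99% of actual CSAM
false_positive_rate = 0.01   # assume it wrongly flags 1% of benign uploads
prevalence = 0.0001          # assume 1 in 10,000 uploads is actually CSAM

flagged_true = prevalence * true_positive_rate
flagged_false = (1 - prevalence) * false_positive_rate
precision = flagged_true / (flagged_true + flagged_false)

print(f"Share of flags that are correct: {precision:.1%}")  # ~1.0%
```

At that (hypothetical) prevalence, roughly 99 out of 100 flags really would be wrong, which is exactly the scenario Guo says platforms won't accept.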

[–] [email protected] 66 points 21 hours ago (4 children)

Reminds me of the A-cup breast porn ban in Australia a few years ago, because supposedly only pedos would watch that

[–] [email protected] 2 points 1 hour ago

This sort of rhetoric really bothers me. Especially when you consider that there are real adult women with disorders that make them appear prepubescent. Whether that's appropriate for pornography is a different conversation, but the idea that anyone interested in them is a pedophile is really disgusting. That is a real, human, adult woman, and some people say anyone who wants to love them is a monster. Just imagine someone telling you that anyone who wants to love you is a monster and that they're actually protecting you.

[–] [email protected] 28 points 14 hours ago (1 children)

There was a porn studio that was prosecuted for creating CSAM. Brazil, I believe. Prosecutors claimed that the petite, A-cup woman was clearly underage. Their star witness was a doctor who testified that such underdeveloped breasts and hips clearly meant she was still going through puberty and couldn't possibly be 18 or older. The porn star showed up to testify that she was in fact over 18 when they shot the film, and brought all her identification, including her birth certificate and passport. She also said something to the effect of: women come in all shapes and sizes, and a doctor should know better.

I can't find an article. All I'm getting is GOP Trump pedo nominees and Brazil laws on porn.

[–] [email protected] 2 points 1 hour ago

Pretty sure the adult star was lil Lupe. She was everywhere at the time because she did, indeed, look underage.

[–] [email protected] 35 points 20 hours ago (2 children)

Aw man, I love all titties. Variety is the spice of life.

[–] [email protected] 30 points 20 hours ago (1 children)

Not to mention the self image impact such things would have on women with smaller breasts, who (as I understand it) generally already struggle with poor self image due to breast size.

[–] [email protected] 12 points 19 hours ago

Clearly the state gives zero fucks about these women, or anyone else or even "the children"

Catholic Church is still around for a reason

[–] [email protected] 14 points 20 hours ago (1 children)

Believe it or not, straight to jail.

[–] [email protected] 16 points 20 hours ago (1 children)

If this is the price I must pay, I will pay it, sir! No man should be deprived of privately viewing a consenting adult's perfectly formed small tits. They can take my liberty, they can take my livelihood, but they will never take away my boner for puffy nipples on a small-chested half-Japanese woman!

[–] [email protected] 11 points 20 hours ago

What is the charge? Biting a breast? A succulent Chinese breast?

[–] [email protected] 39 points 22 hours ago (2 children)

I am a bit confused how it's legal for them to have the training data here?

Like, is there anything a corpo can't do?

Like, why can't Subway Jared and the Catholic Church "train the AI"?

Only halfway joking; what's the catch here?

[–] [email protected] 1 points 1 hour ago

I don't think you even need the actual stuff to train a neural network to recognize it. For example, if I wanted to train a neural network to recognize pictures of lions, but I didn't have any actual pictures of lions, I could use pictures of lion-shaped things, lion-colored things, and locations where lions might appear. If a picture is hitting all three of those, it's very likely to be a lion. "Very likely" is all a neural network can do anyway, so it's good enough for my purposes.
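As a minimal sketch of that idea, here's what combining weak proxy signals might look like. The three sub-detector scores are hypothetical stand-ins for outputs of real classifiers:

```python
# Combine several weak proxy detectors into one "very likely" verdict.
# Each score is assumed to be a classifier output in [0, 1]; none of
# these sub-detectors exist as named, they're placeholders.
def looks_like_lion(shape_score: float,
                    color_score: float,
                    habitat_score: float,
                    threshold: float = 0.7) -> bool:
    """Flag a picture as 'very likely a lion' only if all three signals agree."""
    return min(shape_score, color_score, habitat_score) >= threshold

print(looks_like_lion(0.9, 0.8, 0.85))  # True: all three signals are strong
print(looks_like_lion(0.9, 0.3, 0.85))  # False: the color signal is too weak
```

A real system would learn the combination weights instead of hard-coding a minimum, but the principle of stacking indirect evidence is the same.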

[–] [email protected] 27 points 21 hours ago (1 children)

There are laws around it. Law enforcement doesn't just delete any digital CSAM they seize.

Known CSAM is archived and analyzed rather than destroyed, and used to recognize additional instances of the same files in the wild. Wherever file scanning is possible.

Institutions and corporations can request licenses to access the database, or just the metadata that allows software to tell whether a given file might be a copy of known CSAM.

This is the first time an attempt is being made at using the database to create software able to recognize CSAM that isn't already known.

I'm personally quite sceptical of the merit. It may well be useful for scanning the public internet, but I'm guessing the plan is to push for it to be somehow implemented for private communication, no matter how badly that compromises the integrity of encryption.

[–] [email protected] 12 points 20 hours ago (1 children)

So doesn't that mean law enforcement has the biggest CP collection of anybody? This sounds kinda dangerous...

[–] [email protected] 18 points 19 hours ago* (last edited 19 hours ago)

It does. Kinda.

The police are seldom allowed to be in possession of CSAM, except when seizing the hardware that contains it during an arrest. The database used in modern detection tools is maintained by NCMEC, which has special permission to do so.

And of course there are risks, but it's just digital data. Unless you are creating more, you're not actively harming anyone. And law enforcement absolutely needs that data to take some of the most obvious steps to prevent it being spread further.

Obviously, someone has access, but getting to the actual media files wouldn't be simple. What typically happens is that anyone wanting to detect CSAM is given a hashed version of the database. They can then scan their systems for CSAM by hashing any media they are hosting and seeing whether there are any matches.
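In its simplest exact-match form, that scan is just set membership on file digests; deployed systems like PhotoDNA use perceptual hashes that survive resizing and re-encoding, but the flow is the same. A minimal sketch, with a made-up digest standing in for the distributed hash list:

```python
import hashlib
from pathlib import Path

# Hypothetical distributed hash list: hex digests only, never the media.
# (This example digest is just the SHA-256 of the empty string.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large media never fully loads into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return every file whose digest matches the known-hash list."""
    return [p for p in directory.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_HASHES]
```

The host only ever handles digests, never the original database media, which is why the hashed version can be licensed out at all.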

Whenever possible, people aren't handling the actual media. But for any detection to be possible to begin with, the database of the actual media does need to be maintained somewhere.

AI is a touchier subject, as you can't use hashes to train a model to recognize CSAM that isn't already in the database, so in those cases you have to work with the actual media. This is only recently becoming a thing.

It also leaves open the possibility of false positives. An oft-cited example is parents taking pictures of their own children for innocent reasons, or doctors and parents handling images for valid medical reasons. In a system that flagged such content, someone else would end up seeing that "private" content because it was flagged.

[–] [email protected] 37 points 22 hours ago* (last edited 22 hours ago) (1 children)

It's the earliest AI technology striving to expose unreported CSAM at scale.

horde-safety has been out for a year now. Just saying... It's not a trained AI model in this way, but it's still using Neural Networks (i.e. "AI Technology")

[–] [email protected] 2 points 1 hour ago* (last edited 1 hour ago) (1 children)
[–] [email protected] 2 points 1 hour ago

haha, nah people reported some unexpected censors, and we investigated what part of their prompt might be causing it.

[–] [email protected] 23 points 22 hours ago (4 children)

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All the predecessors have been kept for the sole use of the big players, despite populists always claiming we need to introduce total surveillance to keep the children safe...

[–] [email protected] 1 points 1 hour ago

IFTAS is already working with Thorn towards this goal. But you already have access to such technology through my toolset.

[–] [email protected] 15 points 22 hours ago (2 children)

I was going to say... Sure would be nice to have this feature in all the open source AI image generator tools but you're absolutely right 😩

[–] [email protected] 11 points 21 hours ago

Yeah, unless someone at least publishes a set of hashes of known bad content for the general public... I kind of doubt the true intention is preventing CSAM to the benefit of everyone.

[–] [email protected] 13 points 22 hours ago (5 children)

This seems like a potential actual good use of AI. Can't have been much fun to train it though.

And is there any risk of people turning these kinds of models around and using them to generate images?

[–] [email protected] 2 points 1 hour ago

Available image generators are already capable of generating those images, and they weren't even trained on such material. Once a neural network can detect/generate two separate concepts, it can detect/generate the overlap. It won't be as fine-tuned, obviously, but the results can still turn out scarily accurate.

[–] [email protected] 25 points 22 hours ago (6 children)

If AI was reliable, maybe. MAYBE. But guess what? It turns out that “advanced autocomplete” does a shitty job of most things, and I bet false positives will be numerous.

[–] [email protected] 10 points 21 hours ago

This is not that kind of AI.

[–] [email protected] 14 points 20 hours ago

And is there any risk of people turning these kinds of models around and using them to generate images?

There isn't really much fundamental difference between an image detector and an image generator. The way image generators like Stable Diffusion work is essentially by generating a starting image that's nothing but random static and telling the generator, "find the cat that's hidden in this noise."

It'll probably take a bit of work to rig this child porn detector up to generate images, but I could definitely imagine it happening. It's going to make an already complicated philosophical debate even more complicated.
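For the curious, here's a cartoon of that denoising loop. The "denoiser" below is a trivial stand-in for a real trained model (Stable Diffusion's actual denoiser is a large prompt-conditioned U-Net):

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_denoiser(image: np.ndarray, step: int) -> np.ndarray:
    # Stand-in for a trained model: a real one predicts the noise to
    # strip away, conditioned on a prompt like "a cat". This toy version
    # just shrinks every pixel so the loop visibly converges.
    return image * 0.1

image = rng.standard_normal((64, 64, 3))  # start from pure random static

for step in range(50, 0, -1):
    predicted_noise = fake_denoiser(image, step)
    image = image - predicted_noise  # each pass removes a bit of "noise",
                                     # gradually settling on a final picture

print(image.std())  # the static's variance collapses as the image settles
```

Swap the toy denoiser for a network trained to see a particular kind of content hidden in the noise, and you have a generator for that content, which is exactly the risk being discussed.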
