cross-posted from: https://rss.ponder.cat/post/193608
Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at the Stanford Cyber Policy Center.
The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm this nonconsensual imagery causes, the practice has been normalized by mainstream online platforms and certain online communities.
“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”
The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”
The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”
One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.
One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”
Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”
One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”
In an email to school staff, one victim said, “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”
One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.
💡Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].
In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administrators weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”
According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62 percent of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI-generated nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations, they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.”
The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.”
The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another at a school in Washington that first learned about a nudify incident through a submission to the school’s online harassment, intimidation, and bullying tipline.
One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.
Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”
From 404 Media via this RSS feed
Aren't these AI programs paid for? Wouldn't they register customer info? How are kids even signing up for these things without their parents' cards or whatever? I don't see how there aren't ways to figure this out. It doesn't seem like it should be that easy for kids to produce this; there must be ways to make it more difficult.
I don't even use Google, much less use AI, so I genuinely don't even know how this works, and I normally don't like surveillance in any way, but seems kinda weird that they're just immediately throwing their hands up in the air about CSAM of all things.
Many of the AI models can be run locally. And many of the paid ones offer free trials.
I mean, I could set this shit up in a single afternoon on my laptop, fully locally and with FOSS. And that was 2 years ago when I wanted to experiment with image generation for a while. I imagine going the local route is much easier now.
That's kind of why "solving" AI under capitalism is close to impossible. Everything is a temporary workaround or a bandaid. And by capitalism I don't mean "rule by bourgeois classes" but straight up commodity production. I would be very surprised if this shit didn't start appearing in AES systems as well to some extent.
Well, that leads me to the other thing I was thinking: that maybe it's not even other kids as much as adults making this CSAM.
I wouldn't even know how to locally set up some AI like that. I know kids are usually on the cusp of whatever is being released and know more than some of us older folks, but I still think it sounds like a bit much for a kid who just wants to see some nudes. These kids who have the CSAM made of them usually have some social media, and who knows who's taking their pictures and processing this stuff. That will be much more difficult to resolve, if not impossible to a certain degree like you said. But I don't think it sounds impossible to prevent kids from doing this stuff to other kids. They make it sound like this is some naughty kids up to no good when in reality these are tools being utilized by peds, in my opinion.
It wasn't too hard, I just had to follow a step by step guide. I really can't tell you how much easier it is today because I lost interest in image generation after using it like 3 times, but the trajectory of most FOSS software is to just turn them into a convenient package that you just download, then run. For a kid willing to pretty much ruin a classmate's life, it is frighteningly easy. Kids can do a lot of shit with a little dedication.
It will be very difficult to resolve this issue. Criminal prosecutions of the practices involved will likely lead to the same outcomes as other state-sanctioned crackdowns on electronic communications: they will barely resolve the actual problem (because the actual criminal activity can be hidden away with some effort), but the state can use them as a pretense to attack things it doesn't like.
In general, police effectiveness in capitalist states at preventing and catching crime tends to be surprisingly limited. And whatever effectiveness they have plummets when it comes to regulating electronic communications. Like, the state can't stop the illegal distribution of pirated media, even though protecting property rights is literally the fundamental interest of capitalist states. I somehow doubt that most western countries will really do anything significant to curtail this phenomenon, especially since most of them have an interest in upholding violence against women, and especially trans women, who I would not be surprised are targeted way more often. Like, is the Trump admin of all people really going to regulate the use of AI to stop revenge porn?
The primary perpetrator of such CSAM humiliation rituals would almost certainly be classmates, who would have a personal motivation to ruin someone's life, as opposed to some stranger pedophile, who has no real reason to share CSAM material amongst the classmates of someone they don't know. Such SA and SV incidents are really a matter of the perpetrator asserting power over the victim, and the violation of intimate boundaries is one of the strongest ways to assert control and dominance.
Yeah, you're very right about the humiliation aspect and the rest, especially if other students are seeing it all too. That's terrible that kids are doing this type of shit at such an early age. At that age, I just wanted to see nudes. Revenge AI porn would've been unthinkable. It's still hard for me to process that kids are thinking like this but you're right.
Ugh. I'm so sorry for young girls going through this shit. Capitalism is terrible. This shit should never have been released to the public without the safety mechanisms in place.
Kinda late reply, but honestly, yeah. The safety mechanisms in place were woefully inadequate, and for all the hype OpenAI made about "AI taking over humanity and being extremely dangerous," they never bothered to just delay their releases until after the appropriate guardrails were laid (which would still only be a bandaid solution).
I've never before seen a company create a marketing campaign about how genuinely dangerous their product is while simultaneously rushing releases as fast as possible and creating as much hype as possible so that everyone will rush to develop and use the dangerous technology more.