this post was submitted on 28 Sep 2023
141 points (85.8% liked)

No Stupid Questions


There have been a ton of CSAM and CP arrests in the US lately, especially of cops and teachers, along with at least one female teacher seducing boys as young as 12. I cannot understand the attraction to kids. Even teens. Do these people think they are having a relationship, or that it is somehow okay to take away another human being's innocence? Is it about sex, power, or WTH is it? Should AI-generated CSAM and CP be treated the same as material involving a real person, since it promotes the same issues? I am a grandfather, and I am worried about how far this will go with the new AI being able to put anyone's face into a porno movie too.

It seems to me that a whole new set of worldwide guidelines and laws needs to be put into effect ASAP.

How difficult would it be for AI photo apps to filter out certain words, so that someone cannot make anyone appear naked?

[–] [email protected] 1 point 1 year ago* (last edited 11 months ago) (1 children)

[This comment has been deleted by an automated system]

[–] [email protected] 1 point 1 year ago

You are correct, CLIP can misinterpret things, which is where human intelligence comes in. Having CLIP compute probabilities for the terms that describe what you are looking for, then applying a bit of heuristics, can go a long way. You don't need to train it to recognize a nude child, because it has already been trained to recognize a child and to recognize nudity, so if an image scores high on both "nude" and "child", just throw it out. Granted, it might be a picture of a woman breastfeeding while a toddler looks on, which is not child pornography at all, but unless that is the specific image being prompted for, it is not that big of a deal to just toss it. We understand the conceptual link, so we can set the threshold parameters and adjust as needed.
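
To make that concrete, here is a minimal sketch (not the commenter's actual code) of the kind of heuristic being described, using the Hugging Face CLIP model for zero-shot scoring. The model name, the prompt pairs, and the 0.6 threshold are illustrative assumptions, not a vetted safety filter:

```python
# Sketch of the "scores high on both concepts -> reject" heuristic described above.
# Prompts and thresholds are illustrative assumptions and would need tuning.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def concept_probability(image: Image.Image, concept: str, contrast: str) -> float:
    """Zero-shot probability that `concept` describes the image better than `contrast`."""
    inputs = processor(text=[concept, contrast], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, 2)
    return logits.softmax(dim=1)[0, 0].item()

def should_reject(image: Image.Image, threshold: float = 0.6) -> bool:
    """Reject when the image scores high on both the 'child' and 'nude' concepts."""
    child = concept_probability(image, "a photo of a child", "a photo of an adult")
    nude = concept_probability(image, "a nude person", "a fully clothed person")
    return child > threshold and nude > threshold
```

In practice the prompt wording and thresholds would be adjusted against known false-positive cases, like the breastfeeding example, accepting that some harmless images get tossed.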

As for the companies, it is a tough area. The argument that a company which produced a piece of software is culpable for the misuse of that software is a very tenuous one. There have been attempts to make gun manufacturers liable for gun deaths (especially handguns, since they really only have the purpose of killing humans). That one I can see, as a firearm killing a person is not a "misuse"; indeed, it is the express purpose of its creation. But what is being proposed here would be more akin to holding Adobe liable for child pornography edited in Lightroom, or Dropbox liable for someone using the Dropbox API to set up a private distribution network for illicit materials. In reality, as long as the company did not design a product with the illegal activity expressly in mind, it really shouldn't be culpable for how people use it once it is in the wild.

I do feel like more needs to be done to make training data available for public inspection, as well as to develop forensic methods for interrogating the end products to figure out whether companies are lying about and hiding the materials used for training. That is a more general issue, though, one that covers many of the ethical and legal questions surrounding AI training.