this post was submitted on 20 Jul 2023
26 points (84.2% liked)
Meta (lemm.ee)
Is that really necessary outside of art communities? That seems like something they could just make a rule about and enforce on their own.
Like if someone generated a picture, and then made a meme out of that picture in a meme community, you'd want that labeled and filtered? Why?
It's a touchy subject for a large contingent of creators, and AI rules are enforced by nearly all art communities in their guidelines (if they have them). A rule manually enforced by a large portion of communities is a good example of when a built-in feature could be useful.
I'm pretty confident that art communities with AI content rules make up a very small percentage of overall communities, but even if they didn't, it's already either not allowed by them or required to be labeled, so what would this accomplish?
~~Automation~~ Having it assumed resolved rather than manual work (per community). It would also allow people who are adamantly against AI to filter out AI content in communities that could now allow it, following the filtering ruleset.
Edit: "Automation" was a confusing word choice, since I'm not talking about actual automation.
I get fighting against industries trying to replace people with AI, but if someone is so sensitive about AI content that they don't ever want to see it in any context anywhere, they should probably try to get over that; it isn't going anywhere. People were mad about sound being added to movies too.
How are you going to automate it without AI?
I don't understand what you're referring to. I just suggested we create a filter to hide AI-generated media rather than community-led duplication of rulesets for doing so. I'm not asking for a neural network that automatically detects AI.
"Automation rather then manual work." How can the filter operate, if "AI generated media" are not flagged manually?
Ok, I see. I used a poor choice of words, my mistake. I meant this in terms of moderation work. Since the platform would provide tools for users to mark their work as AI-generated, it would reduce the administration per community. I'm struggling to think of a better word. "Assumed resolved"?
I was thinking the moderators would have to flag posts as AI-generated; I'm not trusting the people posting them to do it. But I 100% agree with what you were saying: there should be a way for users and moderators to mark a post as "containing AI-generated content", similar to NSFW, and then a way for us to filter "AI" like we would NSFW.
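For illustration only, here's a minimal sketch of how such a flag could be filtered client-side, mirroring the way the NSFW toggle already works. The `ai_generated` field and the settings names are assumptions made up for this example, not actual Lemmy API fields:

```typescript
// Hypothetical sketch: an `ai_generated` flag filtered the same way NSFW is.
// The field names below are assumptions, not part of Lemmy's real API.

interface Post {
  id: number;
  name: string;          // post title
  nsfw: boolean;         // existing NSFW flag
  ai_generated: boolean; // hypothetical flag, settable by the poster or moderators
}

interface UserFilterSettings {
  show_nsfw: boolean;
  show_ai_generated: boolean; // hypothetical per-user preference
}

// Hide flagged posts according to the viewer's preferences.
function filterPosts(posts: Post[], settings: UserFilterSettings): Post[] {
  return posts.filter(
    (post) =>
      (settings.show_nsfw || !post.nsfw) &&
      (settings.show_ai_generated || !post.ai_generated)
  );
}

// Example usage
const visible = filterPosts(
  [
    { id: 1, name: "Hand-drawn piece", nsfw: false, ai_generated: false },
    { id: 2, name: "Midjourney render", nsfw: false, ai_generated: true },
  ],
  { show_nsfw: true, show_ai_generated: false }
);
console.log(visible.map((p) => p.name)); // ["Hand-drawn piece"]
```

Since the flag would just be a boolean alongside `nsfw`, the existing moderation tooling (mods toggling it on posts) and the per-user filter preference UI could in principle be reused for it.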