
Recently there's been quite a bit of outrage because the developer of Piefed publicly called out the Fediverse Anarchist Flotilla (FAF) for supposedly using LLMs to automate instance moderation. Even though many of our admins and the larger lemmy community went to great lengths to debunk that post, it has become the disinfo that keeps on giving (see https://lemmy.dbzer0.com/post/68749575, https://kolektiva.social/@ophiocephalic/116518887925988112, https://lemmy.dbzer0.com/post/68222242 and more).

After we clarified our position yet another time, someone suggested we make an official post and an instance policy to "give me something I can boost as a positive example and a sign that things will be better going forward." Given that this storm in a teacup doesn't seem to be abating, as people are all too happy to bring it up again and again to malign the FAF, we're making this post to clarify the situation once and for all.

History

We're not going to rehash the whole drama and the many hit pieces against the FAF in the past two weeks, but I need to lay out the exact situation as it happened, without the speculation and assumptions people are all too happy to jump to.

  • One of our mods develops a tool to download a user's public posting history through the lemmy API, to be used for evaluating them during moderation, and shares it with some people in the admin team as a work in progress. This tool does not feed anything to LLMs; it simply downloads the comments into a local text file for easier review than going through the lemmy GUI. (A rough sketch of what such a tool looks like is included after this list.)
  • Someone is reported to our instance admins for blatant zionism and genocide apologia.
  • An admin uses the tool to download the accused person's comment history for evaluation.
  • A quick evaluation (without an LLM) confirms that this is a person who needs to be instance-banned. The moderation decision is locked in at this point.
  • At the same time, that admin is curious whether LLMs can be used to summarize people's positions, so that mods can quickly follow up with mod actions without having to evaluate everyone's posts manually, and to reduce the workload of admins writing long justifications.
  • As an experiment, the admin passes the user's comment history through a locally-run open-weights LLM (Qwen) to see the summarized output. It happens to match their own decision.
  • The admin decides to leave the LLM summary in a pastebin along with that user's posting history for reference. As an inside joke, they decide to claim the post was summarized by OpenAI, as they expected only our community would care about this, and our stance on corporate LLMs is well known at this point.
  • The admin bans that person, providing a link to that pastebin as justification.
  • The admin decides not to continue using LLMs for summaries anyway, for many valid reasons. As evidence, see the lack of any other pastebins with LLM summaries.
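
For readers curious what the tool in the first step amounts to, here is a minimal sketch (not the mod's actual code) of downloading a user's public comment history through the lemmy API into a text file. It assumes the public GET /api/v3/user endpoint with username/page/limit parameters as exposed by recent lemmy versions; the instance URL, username, and helper name are placeholder assumptions, and the requests library handles the HTTP calls.

```python
#!/usr/bin/env python3
"""Sketch only: dump a user's public lemmy comment history to a text file.

Not the actual moderation tool described above; just an illustration of what
"download the comments locally in a text file" can look like. Assumes the
public GET /api/v3/user endpoint on recent lemmy versions. Instance and
username below are placeholders.
"""
import requests

INSTANCE = "https://lemmy.example.com"   # placeholder instance
USERNAME = "some_reported_user"          # placeholder username


def fetch_comments(instance: str, username: str, limit: int = 50) -> list[dict]:
    """Page through /api/v3/user and collect the user's public comments."""
    comments = []
    page = 1
    while True:
        resp = requests.get(
            f"{instance}/api/v3/user",
            params={"username": username, "sort": "New", "limit": limit, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("comments", [])
        if not batch:
            break
        comments.extend(batch)
        page += 1
    return comments


if __name__ == "__main__":
    comments = fetch_comments(INSTANCE, USERNAME)
    # Write each comment with its publish date so a human can skim the history.
    with open(f"{USERNAME}_comments.txt", "w", encoding="utf-8") as fh:
        for cv in comments:
            c = cv["comment"]
            fh.write(f"--- {c['published']} ---\n{c['content']}\n\n")
    print(f"Saved {len(comments)} comments for review.")
```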

~2 weeks pass...

  • The piefed developer is banned by a different mod in our instance for "zionism". (I put this in quotes as this is one mod's opinion, and not necessarily our instance's position.)
  • The piefed developer apparently starts going through our instance modlogs for banned zionists and parses all their justifications.
  • The piefed developer discovers the modlog justification from two weeks earlier containing the LLM summary.
  • The piefed developer quickly asks about it in the common lemmy admin channel, at which point the instance admin in question clarifies that the LLM was not used in the decision-making.
  • The piefed developer does not officially reach out to anyone else from our admin team, despite the fact that we've reached out before and asked them to contact us in advance on inter-instance matters to avoid escalations.
  • The piefed developer makes the public call-out I linked above as a piece of investigative journalism. They do not include the comments from our team which conflict with their narrative, nor do they ask us for an official statement.
  • To this day, the piefed developer has not amended their public call-out to account for the comments that multiple of our admins and lemmy users have left under their post, conflicting with its narrative.

If you feel I've misrepresented any steps of this history, please let us know and I'll be happy to adjust.

Given that, we acknowledge that even though we didn't use LLMs in moderation, we allowed it to appear as if we did, and that's on us. We will of course not make the same mistake again (appearing to use LLMs for moderation).

The FAF's stance on LLM moderation

We are aware that our instance is seen as "LLM-friendly" due to our nuanced take on LLMs, but that does not mean that we, as an instance, ever considered using LLMs to moderate our instance. So we want to make it absolutely crystal clear where we stand on the matter.

As an official policy:

  • We have never used LLMs to guide our moderation decisions. This includes using LLM summaries which we would then validate, as well as LLM summaries which we use to confirm our existing decisions. LLMs are just not in our moderation loop whatsoever.
  • We have never passed instance data to corporate LLMs.
  • We have not used any automated moderation tooling which utilizes LLMs. The closest we have is the FOSS anti-CSAM filter I've developed and shared for years now, which relies strictly on locally-hosted machine-vision models.
  • We have never officially considered using LLMs for moderation, nor do we plan to.
  • As a team, we're steadfastly against LLMs for moderation due to their inherent biases.
  • If any of the above changes, we will publicly inform the FAF community.

We hope this can finally put this matter to rest.

[-] naevaTheRat@lemmy.dbzer0.com 6 points 4 hours ago

And yet from neither can you provide a citation of "human cognition is just pattern matching in a loop"

[-] troed@fedia.io -5 points 4 hours ago

The difference between you and me is that I've studied the subject. You have not. It's not on me to teach you the contents of the literature.

Go be annoying somewhere else.

[-] alsaaas@lemmy.dbzer0.com 2 points 1 hour ago

I've studied the subject

[Meme with the captions: "Source?" / "This was once revealed to me in a dream"]

[-] GeeDubHayduke@lemmy.dbzer0.com 2 points 1 hour ago

Did you really just drop "I've done my research" here?! Lol! Bet you're an immunologist, too. And a lawyer on top.

[-] troed@fedia.io 0 points 57 minutes ago

You lost the bet - where do I get my payout?

Your cognitive bias is known as "Out-Group Homogeneity Bias". Enjoy.

https://dictionary.apa.org/outgroup-homogeneity-bias

[-] GeeDubHayduke@lemmy.dbzer0.com 1 points 48 minutes ago

That's not a link to your blog? Don't real researchers cite themselves? You're falling off, friend.

[-] troed@fedia.io 1 points 41 minutes ago

Was your cognitive dissonance slightly alleviated by what you believe to be a smart comeback?

hugs

[-] kernelle@lemmy.dbzer0.com 4 points 3 hours ago

You do realise saying "I've studied the subject" has no credibility behind it whatsoever?

If you've truly studied the subject you'd be able to explain your rationale, not lash out against people asking you to clarify.

As far as this thread is concerned you've read a book once and it made you an armchair expert. Probably the worst pseudo-intellectualism I've seen on here for a long while.

[-] troed@fedia.io -1 points 2 hours ago

I don't care. See how easy it is? Either you're interested in the subject and you would already know that what I wrote is completely uncontroversial, or you spend time making ignorant posts because a simple fact disagrees with your feels.

[-] kernelle@lemmy.dbzer0.com 2 points 2 hours ago

When a colourblind person tells you the sky is green, do you call them uninformed and tell them to "go read a book", or do you notice their world view is different from yours and try to figure out why? Maybe discover tritanopia in the process.

Right, "you don't care"

[-] troed@fedia.io -1 points 2 hours ago

Not a single person who has commented is interested in an actual discussion regarding the science on consciousness. It's all this: https://blog.troed.se/posts/the-coming-cognitive-disbelief/

[-] kernelle@lemmy.dbzer0.com 1 points 50 minutes ago

A lot of responses to you wanted to have a discussion about it, you're shooting them down before they even happen by not providing a basis for your argument.

Although I agree with your premise to a point, it's crazy to regard a philosophical argument as absolute truth.

[-] troed@fedia.io 1 points 43 minutes ago

I've sourced two of the foremost specialists on the subject. Blackmore's "Consciousness: An Introduction" amounts to a full university semester on the subject. No, I don't really see it as my job to condense that down in a post here. Anyone who's actually interested can start with reading up summaries that are available freely online instead of posting bad takes at me.

[-] naevaTheRat@lemmy.dbzer0.com 6 points 4 hours ago

If you had studied you would know that when you make extremely strong claims you must back them up with more evidence than "here is a book". The typical fashion is to provide a quotation, or chapter/page reference making it easy to demonstrate that you're not talking out of your arse.

Of course no serious person actually thinks human brains are "just pattern matchers in a loop" because that statement is silly, it's not even clear what that would mean. So of course you can't cite someone saying that.

[-] troed@fedia.io -5 points 4 hours ago
[-] naevaTheRat@lemmy.dbzer0.com 6 points 4 hours ago

Just say you exaggerated, there's no need to resort to insults lol. You said something stupid and you can't back it up so you're mad at me.
