MeanwhileOnGrad

"Oh, this is calamity! Calamity! Oh no, he's on the floor!"
Welcome to MoG!
Meanwhile On Grad
Documenting hate speech, conspiracy theories, apologia/revisionism, and general tankie behaviour across the fediverse. Memes are welcome!
What is a Tankie?
Alternatively, a detailed blog post about Tankies.
(caution of biased source)
Basic Rules:
Sh.itjust.works Instance rules apply! If you are from other instances, please be mindful of the rules. — Basically, don't be a dick.
Hate-Speech — You should be familiar with this one already; practically all instances have the same rules on hate speech.
Apologia — (using the modern sense of the term) No defending, denying, justifying, bolstering, or differentiating authoritarian acts or endeavours, whether it be a pro-CCP viewpoint, Stalinism, Islamic terrorism, or any other variation of tankie ideology.
Revisionism — No downplaying or denying atrocities past and present. Calling Tankies shills, foreign/federal agents, or bots also falls under this rule. Extremists exist. They are real. Do not call them shills or fake users, as it handwaves their extremism.
Off-topic Discussion — Do not derail the thread by discussing unrelated topics. Stay focused on the direct content of the post rather than getting drawn into arguments that are going nowhere.
Brigading/Trolling — If you're here because this community was linked in another thread, please refrain from maliciously voting, commenting, or manipulating the post in any way. This includes alt accounts. All votes are public, and if you are found to be brigading, you will be banned. Good-faith and honest communication is an exception.
Tankies can explain their views, but may be criticised or challenged for them. Any minor infraction of the rules may result in a warning and possibly a temporary ban.
You'll be warned if you're violating the instance and community rules. Continuing poor behaviour after being warned will result in a ban or removal of your comments. Bans typically last only 24 hours, but each subsequent infraction doubles the duration. Depending on the content, the ban time may be increased. You may request an unban at any time.
Given how LLMs always try to please, asking one to find something specific is very likely to result in just that. A more neutral query would be to simply ask it to summarize the history, though it's odd that a purportedly anarchist group would rely on a completely centralized corporate LLM in its moderation. Even then, you need to be careful not to be swayed by the LLM's suggestions and to analyze each highlighted comment in an unbiased manner.
What is weird is that it's clear they didn't decide to ban because of an LLM output, and that they used local and FOSS models to summarize the explanation. The LLM was used for the explanation they provided, rather than for the decision itself; otherwise it would have just been 'Zionism' or 'Rule #X' or 'genocide denial'.
Their own caption says it was ChatGPT, and I don't believe that can be run locally. Either way, one of the many issues with LLMs is that they come across like a person and are convincing even when hallucinating. Couple that with the human tendency to take people at their word, and you have created a perfect environment for being manipulated. Going by the linked thread, in one instance the AI included a quote that wasn't the exact quote. You could argue the actual comment wasn't that different from it, but that's more like confirmation bias. Even asking an AI not to comment on anything and just distill the provided content to the most important quotes would be affected by selection bias, but that's not even how they used the model. They literally asked it to find what they were looking for.
Right, and the LLM output isn't something I would've used either. The only thing I am noting as important is that the LLM stuff was an extra and unnecessary step, if that makes sense.
I don't know anything about that banned user, but perhaps it's that they didn't want to look like the main characters of this community who essentially ban "just because" and needed something they believed would be convincing?
Uh sure, that's a plausible way of looking at it.