I think you're underestimating the role of RLHF.
I'm not really an expert on all the details, so I might be wrong here. I don't know what share of the work happens in pretraining versus tuning. But from what I know, the neural pathways are established in the pretraining phase. Reportedly that's also where the model learns the concepts it internalises, where it gets its world knowledge. So it seems to me that something as complicated as learning about a concept like a feeling, or an experience, would already get established in pretraining. RLHF is more about what the model does with it. The lines between RLHF, fine-tuning and pretraining are a bit blurry anyway. If I had to guess, I'd say qualia is more likely to be laid down early on, while there are still big changes happening to the neural pathways, so in pretraining. I'm basing that on my belief that it'd be a complex concept. But ultimately there's no good way to tell, because we don't know what it would look like for an AI.
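To make the distinction concrete, here's a hedged toy sketch in PyTorch (all names, sizes, and data are made up for illustration, and this is not how any specific lab does it). Pretraining does next-token prediction over the whole corpus and moves all the weights; an RLHF-style step, simplified here to a plain policy-gradient update (real pipelines use PPO with a KL penalty and more), only nudges the model toward outputs a reward model scores highly.

```python
# Toy contrast of the two training signals (hypothetical tiny model).
import torch
import torch.nn.functional as F

vocab, dim = 1000, 64
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab, dim),
    torch.nn.Linear(dim, vocab),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# --- Pretraining: predict the next token everywhere in the corpus,
# gradients flow into every weight. This is where "world knowledge" enters.
tokens = torch.randint(0, vocab, (8, 33))          # stand-in for real text
logits = model(tokens[:, :-1])                     # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step(); opt.zero_grad()

# --- RLHF (simplified REINFORCE flavor): reinforce whole outputs that a
# learned reward model likes. No new text about the world enters here;
# it only re-weights behaviours the pretrained model can already produce.
response = torch.randint(0, vocab, (8, 16))        # sampled model outputs
reward = torch.randn(8)                            # stand-in reward scores
logp = F.log_softmax(model(response), dim=-1)
chosen = logp.gather(-1, response.unsqueeze(-1)).squeeze(-1).sum(-1)
rl_loss = -(reward * chosen).mean()
rl_loss.backward()
opt.step(); opt.zero_grad()
```

The point of the sketch is the shape of the signal: the pretraining loss touches every position of every document, while the RLHF loss is one scalar per whole output, which is much better suited to shaping behaviour than to building new concepts.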
Furthermore, I had a bit of a look at what weird use cases people have for AI, and I read about the community efforts to make models usable for NSFW stuff. These people teach new concepts to AI models after the fact, like what human anatomy looks like underneath the clothes, or the physics of those parts of the body. It turns out to be a major hassle. It can degrade other capabilities, and it often only works for inputs close to what the model has already seen, so evidently the model didn't properly understand the new concept. These people also tend to fail at more general models; it's clearly hard for a model to pick up more than one new concept at a late stage. All of this leads me to believe that the later stages of training are a bad time for an AI to learn entirely new concepts. It seems to require the groundwork to be there since pretraining. That's probably why we can fine-tune a model to prefer a certain style, like Van Gogh paintings, or a certain way of speaking, as in RLHF, but not a complicated concept like anatomy: the Van Gogh paintings were in the pretraining dataset already, while the nudes were cleaned out. So I'd assume another complicated concept like qualia also needs to come in early on, or it won't happen later.
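For the fine-tuning side, here's a minimal sketch of the LoRA-style approach those communities typically use (plain PyTorch, a single stand-in layer, all names hypothetical): the pretrained weights are frozen and only a tiny low-rank correction is trained on top. That's roughly why it can bend style but struggles to install a genuinely new concept.

```python
# LoRA-style fine-tuning sketch: freeze the pretrained layer, train only
# a small low-rank "detour" added to its output.
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # pretrained weights stay frozen
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # frozen pretrained path + small trainable low-rank correction
        return self.base(x) + x @ self.A.T @ self.B.T

pretrained = torch.nn.Linear(4096, 4096)       # stands in for one frozen layer
tuned = LoRALinear(pretrained, rank=8)

total = sum(p.numel() for p in tuned.parameters())
trainable = sum(p.numel() for p in tuned.parameters() if p.requires_grad)
print(f"trainable share: {trainable / total:.2%}")   # well under 1%
```

With a rank-8 correction on a 4096x4096 layer, far less than 1% of the parameters ever move, which fits the observation above: enough capacity to shift a style that pretraining already contains, not enough to build the groundwork for an absent concept.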
Edit: YT video about emotion in LLMs and current research: https://m.youtube.com/watch?v=j9LoyiUlv9I