submitted 5 days ago* (last edited 5 days ago) by neatchee@piefed.social to c/yepowertrippinbastards@lemmy.dbzer0.com

There's only one mod of !mop@quokk.au

I commented on their meme about Kamala Harris being just as likely to commit war crimes as Trump with an admittedly snarky, sarcastic reply that basically said "some of us wanted to do whatever we could, as little as it might be, instead of watching the world burn. Must feel real morally superior safe behind that keyboard"

They banned me from the community for it.

Kinda funny for a community that bills itself as "free from the influence of .ml"

modlog entry showing ban of neatchee from mop community


[-] Grail@multiverse.soulism.net 1 point 4 days ago

I think you're underestimating the role of RLHF.

[-] hendrik@palaver.p3x.de 2 points 4 days ago* (last edited 4 days ago)

I'm not really an expert on all the details, so I might be wrong here. I don't know the percentages of how much is done in pretraining and how much in tuning. But from what I know, the neural pathways are established in the pretraining phase. Reportedly that's also where the model learns the concepts it internalises... where it gets its world knowledge. So it seems to me that something complicated, like learning about a concept like a feeling or an experience, would get established in pretraining already. RLHF is more about what the model does with it. But the lines between RLHF, fine-tuning and pretraining are a bit blurry anyway. If I had to guess, I'd say qualia is more likely to be established early on, while there's still a lot of change happening to the neural pathways, so in the pretraining. I'm basing that on my belief that it'd be a complex concept... But ultimately there's no good way to tell, because we don't know what it'd look like for an AI.

Furthermore, I had a bit of a look at what weird use cases people have for AI. And I read about the community efforts to make models usable for NSFW stuff. These people teach new concepts to AI models after the fact. Like what human anatomy looks like underneath the clothes. The physics of those parts of the body. And it turns out it's a major hassle. It might degrade other things. It might only work for something close to what the model has already seen, so obviously the AI didn't understand the new concept properly... These people tend to fail at making more general models; apparently it's hard for an AI to learn more than one new concept at a later stage... All of this leads me to believe the later stages of training are a bad time for an AI to learn entirely new concepts. It seems to require the groundwork to be there since pretraining. That's probably why we can fine-tune a model to prefer a certain style, like Van Gogh drawings, or a certain way of speaking, like in RLHF, but not a complicated concept like anatomy. Because the Van Gogh drawings were in the pretraining dataset already, and the nudes were cleaned out. So I'd assume another complicated concept like qualia also needs to come early on. Or it won't happen later.
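The intuition that a long pretraining phase "locks in" most of a model's behaviour, while a short fine-tune only nudges it, can be sketched with a deliberately silly toy model. This is my own illustration, not how real transformers or RLHF actually work: a single-parameter linear model is "pretrained" at length on one rule, then briefly "fine-tuned" on a conflicting rule, and the original rule still dominates.

```python
# Toy illustration (NOT a real LLM): long "pretraining" establishes a
# parameter; a brief low-learning-rate "fine-tune" on conflicting data
# only nudges it, so the pretrained behaviour still dominates.

def train(w, data, lr, steps):
    """One-parameter linear model y = w*x, trained with per-sample
    gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# "Pretraining": lots of exposure to the rule y = 2x
pretrain_data = [(x, 2 * x) for x in range(1, 6)]
w = train(0.0, pretrain_data, lr=0.01, steps=200)   # converges to ~2

# "Fine-tuning": a short, gentle pass over a new rule y = 5x
finetune_data = [(x, 5 * x) for x in range(1, 6)]
w_ft = train(w, finetune_data, lr=0.001, steps=2)   # ends up ~2.6,
# still much closer to the pretrained rule (2) than the new one (5)
```

Of course this only shows that small, short updates can't overwrite a well-established parameter; whether that analogy carries over to concepts in a billion-parameter network is exactly the open question.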

Edit: YT video about emotion in LLMs and current research: https://m.youtube.com/watch?v=j9LoyiUlv9I

this post was submitted on 07 Apr 2026

Ye Power Trippin' Bastards

1771 readers

This is a community in the spirit of "Am I The Asshole" where people can post their own bans from lemmy or reddit or whatever and get some feedback from others whether the ban was justified or not.

Sometimes one just wants to be able to challenge the arguments some mod made and this could be the place for that.


Posting Guidelines

All posts should follow this basic structure:

  1. Which mods/admins were being Power Tripping Bastards?
  2. What sanction did they impose (e.g. community ban, instance ban, removed comment)?
  3. Provide a screenshot of the relevant modlog entry (don’t de-obfuscate mod names).
  4. Provide a screenshot and explanation of the cause of the sanction (e.g. the post/comment that was removed, or got you banned).
  5. Explain why you think it's unfair and how you would like the situation to be remedied.

Rules


Expect to receive feedback about your posts; some of it might even be negative.

Make sure you follow this instance's code of conduct. In other words, we won't allow bellyaching about being sanctioned for hate speech or bigotry.

YPTB matrix channel: For real-time discussions about bastards or to appeal mod actions in YPTB itself.


Some acronyms you might see.


Relevant comms

founded 2 years ago
MODERATORS