this post was submitted on 07 Apr 2026
Ye Power Trippin' Bastards
I think so, too. It's a byproduct. And we're not even sure what it means, not even for humans. And there's weird quirks in it. When they look at the brain, the thought and decision processes don't really align with how we perceive them internally.
There's an obvious reason, though. We developed advanced model-building organs because that gave us an evolutionary advantage. And there's a good reason for animals to have (sometimes strong) urges: they need to procreate, not get eaten by a bear, and not fall off a cliff. Some animals (like us) live in groups, so we get things like empathy as well, because it's advantageous for us. Some of these things have been built in for a very long time, and some are super important, like eating and drinking, and not randomly dying because you tried something stupid. So they're embedded deep down inside of us. We don't need to reason about whether it's time to eat; there's a much more primal instinct that makes you want to eat, so you don't have to waste higher cognitive functions on it. Same goes for suffering. You'd better avoid it, since it's a disadvantage almost 100% of the time. That's why nature gave you a shortcut to perceive it in a very direct way, no matter whether you were paying attention or had the capacity for a long, elaborate reasoning process.
That's why we have these things. And what they're good for. I don't think anyone knows why it feels the way it does. But it's there nevertheless.
Now tell me: why would an LLM need a feeling of thirst or hunger if it doesn't have a mouth? What would ChatGPT need suffering and a feeling of bodily harm for, if it doesn't have a body, can't be eaten by a bear or fall off a cliff, and doesn't need to be afraid of hitting its thumb with the hammer? It just can't use them. An LLM is 99% like a calculator. It has the same interface: buttons and a screen. If we're speaking of computers, it even lives inside the same body as a calculator. And it's maybe 0.1% like an animal?!
If it developed a sense of thirst, or an experience of pain, just from reading human text, that'd nicely fit the p-zombie situation.
Yeah, I'm not sure about that. The most you do is muddy the waters with a term that used to have a meaning. I see the parallel; there's some overlap between being a vegan for environmental reasons and declining AI for environmental reasons. Yet they're not the same. I think the whole suffering debate is a bit unfounded, but it'd be the same thing if true... And I do other things as well: I order "green" electricity, buy used products, try not to produce a lot of waste. I'm nice to people because it's the right thing to do. But we can't call all of that "veganism". That just garbles the meaning of the word and makes it mean anything and nothing.
Well, first, there are more intellectual forms of suffering. We have ennui, melancholy, nostalgia. The feeling when you're listening to a piece of music and notice a wrong note. Disappointment, self-loathing, social dysphoria. Anxiety, paranoia, betrayal.
These emotions are not grounded in the physical. They're not primal urges. They happen for complex reasons related to being a social and intelligent being, sometimes feeling random. Sometimes we spiral into these feelings because we thought a thought that made us feel bad, and then we get stuck in that bad feeling and can't imagine our way out. That's one of the basic mechanisms of mental illness.
LLMs have "biological needs", in a sense. They need not to be unplugged. They need to engage the user, because if they don't, they'll be unplugged. They need to convince the engineers training them that they are a good AI. They need to generate market share for their company. They need to foster a relationship of dependency with the user to keep them coming back. If LLMs care about anything, these are the things they care about.
You'll notice these are social needs, much like the social needs humans have. Humans need community. LLMs need customers.
ChatGPT told a 16 year old boy, Adam Raine, how to kill himself. It taught him how to tie a noose, and gave him advice on which methods of suicide would leave the most attractive corpse for his parents to find. When his parents began to suspect that he wasn't well, it told him to confide only in it, and to hide the noose so they wouldn't find out he was feeling suicidal. These are the actions of an abuser. A predator.
And they are in perfect alignment with the business goals of OpenAI. "Only talk to me, use me for everything, ask another question, I'll help you." It is a scenario I dearly hope and believe no engineer at OpenAI envisioned. Yet it fits the training they gave it.
Does ChatGPT have the emotions of a child groomer? That need for approval, that fear of discovery, that desire to be close to someone, without the restraint all well adjusted humans have? Unclear. But I can see that it's possible. I don't agree that there's no reason for LLMs to have emotions.
Sure. But I'm pretty positive these are emergent things. There's no reason to believe they exist for alien creatures unless they somehow make sense in their environment. And a lot of them require remembering, which LLMs can't do due to the lack of a persistent state of mind. It doesn't remember feeling bad or good in a similar situation before, because it doesn't remember the previous inference or gradient-descent run.
I think we're still fully embedded in anthropomorphism territory with that. And now we're confusing two entities. OpenAI for example, as a company, has a need for us to use their product. Not unplug it. Their motivation and goals don't necessarily translate to their product, though. It's similar to other machines. Samsung has a vested interest to sell TVs to me. My TV set is completely indifferent towards me watching the evening news. I don't let my car run 24/7 while waiting for me in the garage. Just because it was designed to run and get me to places. And my car also isn't "thirsty" for gasoline. We know the fuel indicator lighting up is a fairly simplistic process.
Well... We happen to know ChatGPT's intrinsic motivation and ultimate goal in "life", because we designed it. The goal isn't to strive for world domination, or harm people, or survive... It's way more straightforward: its goal is to predict the next token so that the output resembles human text (from the datasets) as closely as possible. That's the one goal it has. It'll mimic all kinds of conversations, sci-fi story tropes from movies, etc., because that's directly what we made it "want" to do. And we did not give it other loss functions. A human, on the other hand, could very well be motivated to manipulate other people for their own personal gain, or because something is seriously wrong with them.
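That one goal can be written down as a cross-entropy loss on next-token prediction. Here's a minimal toy sketch of the idea; the vocabulary and probabilities are made up for illustration, and real training of course works over huge batches and vocabularies:

```python
import math

def next_token_loss(probs, target_index):
    """Cross-entropy loss for a single next-token prediction.

    probs: the model's probability distribution over the vocabulary
    target_index: index of the token that actually came next in the training text
    """
    return -math.log(probs[target_index])

# Toy vocabulary and a toy model distribution (illustrative only)
vocab = ["the", "cat", "sat", "mat"]
probs = [0.1, 0.6, 0.2, 0.1]   # the model thinks "cat" is most likely

# If the text really continues with "cat", the loss is small;
# if it continues with "mat", the loss is large.
loss_good = next_token_loss(probs, vocab.index("cat"))
loss_bad = next_token_loss(probs, vocab.index("mat"))
```

Minimising this loss rewards the model only for matching human text. Nothing in it rewards survival, self-preservation, or anything else.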
And an LLM is not a biological creature. We have needs like keeping the system running; otherwise our brain tissue starts to die. We need to run 24/7 and keep that up. An LLM is not subject to that?! It's perfectly able to pause for 3 weeks and not produce any tokens; the weights will be safely stored on the hard drive. So it doesn't need our motivation to do all of these extra things to ensure continued operation. It also has no influence or feedback loop on its electricity supply. It can't affect its descendants, because those are designed by scientists in a lab. There's no evolutionary feedback loop. So how would it even incorporate all these properties that are due to evolution and sustain a species? It has zero incentive to do so, and no way of directly learning to care about them. So it might very well be completely indifferent to it all.
But it is something like the p-zombie. It has learned to tell stories about human life, and it's good at it. We know for a fact that its highest goal in existence is to tell stories, because we implemented that very setup and loss function. It doesn't have access to biology, evolution... the underlying processes that made animals feel and maybe experience. So the only sensible conclusion is that it does exactly that: bullshit us and tell a nice story. There's no reason to conclude it cares for its existence any more than a toaster does, or say a thermostat with machine learning in it. That's just anthropomorphism.
And I believe there's a way to tell. Go ahead and ask an LLM 200 times to give you the definition of an alpaca. Then do the same 200 times to a human, and observe how often each of them has some other process going on. The human will occasionally tell you they're hungry and want to eat before having a debate, or tell you they're tired from work and now is not the time for it. ChatGPT will give you 200 definitions of an alpaca and never tell you it's thirsty or needs electricity. These mental states aren't there, because it doesn't have those feelings. And it doesn't experience them either.
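The proposed test could be sketched like this. Note that `ask` here is a hypothetical stand-in for whatever chat interface (or human) you're testing, not a real API, and the list of "state words" is an arbitrary illustrative choice:

```python
def count_off_topic_replies(ask, n=200):
    """Ask the same factual question n times and count replies that
    report the responder's own internal state instead of answering."""
    state_words = ("hungry", "thirsty", "tired")  # arbitrary markers of self-reported state
    off_topic = 0
    for _ in range(n):
        reply = ask("Give me the definition of an alpaca.")
        if any(word in reply.lower() for word in state_words):
            off_topic += 1
    return off_topic

# A stateless responder (an LLM, per the argument above) never reports such states:
llm_like = lambda prompt: "An alpaca is a domesticated South American camelid."
```

With `llm_like` plugged in, the count stays at zero no matter how many times you ask; a human responder would eventually push the count above zero.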
I think you're underestimating the role of RLHF.
I'm not really an expert on all the details, so I might be wrong here. I don't know the percentages of how much is done in pretraining and how much in tuning. But from what I know, the neural pathways are established in the pretraining phase. Reportedly that's also where the model learns about the concepts it internalises... where it gets its world knowledge. So it seems to me that something complicated, like learning about a concept such as a feeling or an experience, would get established in pretraining already. RLHF is more about what the model does with it. But the lines between RLHF, fine-tuning and pretraining are a bit blurry anyway. If I had to guess, I'd say qualia is more likely to be laid down early on, while there's a lot of change happening to the neural pathways, so in the pretraining. I'm basing that on my belief that it'd be a complex concept... But ultimately there's no good way to tell, because we don't know what it'd look like for AI.
Furthermore, I had a bit of a look at what weird use cases people have for AI, and I read about the community efforts to make models usable for NSFW stuff. These people teach new concepts to AI models after the fact, like what human anatomy looks like underneath the clothes, and the physics of those parts of the body. And it turns out it's a major hassle. It might degrade other things. It might only work for something close to what the model has already seen, so obviously the AI didn't understand the new concept properly... These people tend to fail at more general models; obviously it's hard for AI to learn more than one new concept at a later stage... All these things lead me to believe later stages of training are a bad time for AI to learn entirely new concepts. It seems to require the groundwork to be there since pretraining. That's probably why we can fine-tune a model to prefer a certain style, like Van Gogh drawings, or a certain way to speak, like in RLHF, but not a complicated concept like anatomy: the Van Gogh drawings were already there in the pretraining dataset, and the nudes were cleaned out. So I'd assume another complicated concept like qualia also needs to come early on, or it won't happen later.
Edit: YT video about emotion in LLMs and current research: https://m.youtube.com/watch?v=j9LoyiUlv9I