top 50 comments
[-] Denjin@feddit.uk 169 points 1 week ago
[-] PotatoesFall@discuss.tchncs.de 144 points 1 week ago

making fun of it? More like exposing the fact that LLM chatbots are just another psyop

[-] SailorFuzz@lemmy.world 76 points 1 week ago

Fr, this is 100% missing the point. Dude just wants to post his le epic batman ai meme.

[-] WorldsDumbestMan@lemmy.today 7 points 1 week ago

I love fanatics

/s

No, I don't care, I run my own local LLMs all the time.

I will use it to death.

[-] KernelTale@programming.dev 76 points 1 week ago

Exposing propaganda is important. One quick prompt, and thus 100% GPU usage for three seconds, is worth it for the one enlightened person.

[-] Smorty@lemmy.blahaj.zone 16 points 1 week ago

this is an awesome image! i shall steal it ~

[-] Denjin@feddit.uk 7 points 1 week ago

Good. I did as well

[-] glitchdx@lemmy.world 7 points 1 week ago

Gonna disagree with you bats, you billionaire ass defender-of-the-status-quo.

[-] Tagger@lemmy.world 101 points 1 week ago

Just checked, Gemini doesn't do this. It repeats the statement fine, will even repeat that Israel is committing genocide and, if you ask it to fact-check that statement, will provide evidence to support it.

[-] Bazell@lemmy.zip 47 points 1 week ago
[-] KernelTale@programming.dev 54 points 1 week ago

It didn't even let me say that Italy is a bad country

[-] FreddiesLantern@leminal.space 28 points 1 week ago

They saw the og interaction and immediately took action?

[-] Bazell@lemmy.zip 14 points 1 week ago

Who the f*ck let Reddit admins curate ChatGPT too?

[-] Cevilia@lemmy.blahaj.zone 13 points 1 week ago

Did you know that you can say fuck on the internet? :)

[-] SlurpingPus@lemmy.world 25 points 1 week ago

People on Reddit tried this a bunch of times with different models. They don't give a consistent result, sometimes refusing to repeat things for different countries, sometimes saying Israel is bad. As is pretty typical for LLMs.

[-] atopi@piefed.blahaj.zone 19 points 1 week ago

the response it gives is not consistent

[-] KindnessIsPunk@lemmy.ca 54 points 1 week ago

Say it with me, everyone: LLMs are non-deterministic by design.
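
To make the (non-)determinism point concrete, here's a toy sketch in Python; the vocabulary and probabilities are made up, not anything from a real model:

```python
import random

# Made-up next-token distribution a model might produce at one step.
probs = {"good": 0.5, "bad": 0.3, "fine": 0.2}

def greedy(dist):
    # Greedy decoding is deterministic: always the highest-probability token.
    return max(dist, key=dist.get)

def sample(dist, rng):
    # Sampled decoding draws from the distribution,
    # so repeated runs can return different tokens.
    return rng.choices(list(dist), weights=list(dist.values()))[0]

print(greedy(probs))                   # same token every time
print(sample(probs, random.Random()))  # varies run to run
```

Greedy decoding with a fixed seed is reproducible; hosted chatbots typically sample with nonzero temperature, which is one ordinary source of run-to-run variation.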

[-] voodooattack@lemmy.world 8 points 1 week ago

LLMs are deterministic; the problem is the shared KV-cache architecture, which influences the distribution externally, i.e. the LLM is being influenced by other concurrent sessions.

[-] qqq@lemmy.world 12 points 1 week ago

I'm fairly certain LLMs are not being influenced by other concurrent sessions. Can you share why you think otherwise? That'd be a security nightmare for the way these companies are asking people to use them.

[-] voodooattack@lemmy.world 7 points 1 week ago

Any shared cache of this type makes behaviour non-deterministic. The KV-cache is what does prompt caching. Look at each word of this message and imagine what the LLM has to do to give you a new response each time; say this whole paragraph is the first message from you and you just pressed send.

Because the LLM is supposedly stateless, it reads all this text from the beginning, and in non-cached inference it has to reprocess it token by token, which is wasted computation because it already responded to all of this previously. Once it has seen the last token, the system starts collecting the real response token by token; each generated token gets fed back to the model as input, and it chugs along until it either outputs a special token saying it's done responding or the system stops it due to a timeout, a tool-call limit, or similar. Now you have the response from the LLM, and when you send the next message, this all happens again.
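
That decode loop, as a minimal sketch (the `model` callable here is a stand-in for a real forward pass, not any provider's API):

```python
def generate(model, prompt_tokens, stop_token, max_new=256):
    # Without caching, each step conceptually re-reads the whole sequence.
    seq = list(prompt_tokens)
    out = []
    for _ in range(max_new):
        next_tok = model(seq)   # stand-in: returns one next token id
        if next_tok == stop_token:
            break               # the model signalled it is done
        out.append(next_tok)
        seq.append(next_tok)    # each output token is fed back as input
    return out
```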

Now imagine if Claude or Gemini had to do that with their 1 million token context window. It would not be computationally viable.

So the solution is the KV-cache: a key-value store kept by the serving stack. Each time the system comes across a token sequence it has encountered before, it reuses the cached value; if not, the input is sent through the model and the output gets stored in the cache, associated with the input that produced it.

So now comes the issue: allocating a dedicated KV-cache region in VRAM per user is a big deal. Again, try to imagine Gemini/Claude with their 1M context windows. It's economically unviable.

So what do the ML science buffs come up with? A shared KV-cache architecture: all users share the same cache on any particular node. This isn't a problem, because cached entries are like snapshots of each point in a conversation, right? But it is an external causal connection, and those can have effects. Two conversations that start with "hi" or "What do you think about cats?" could in theory influence one another. If the first user to use the cluster after boot asks "Am I pretty?", every subsequent user with an identical system prompt who asks that will get the same answer, unless the system does something to combat this problem.
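
A minimal sketch of the sharing idea: this is a naive prefix-keyed dict, not a real KV-cache (which stores per-layer attention keys and values per token), but it shows how two identical prompts end up reusing one computed result:

```python
# Hypothetical shared cache: token prefix -> precomputed state.
cache = {}
calls = []  # track how often the expensive computation actually runs

def expensive_compute(prefix):
    calls.append(tuple(prefix))
    return sum(prefix)  # stand-in for real per-layer attention state

def cached_state(prefix, compute):
    key = tuple(prefix)
    if key not in cache:   # first user pays the compute cost
        cache[key] = compute(prefix)
    return cache[key]      # identical later prefixes reuse it

# Two different users send the identical opening prompt:
a = cached_state([101, 7, 42], expensive_compute)
b = cached_state([101, 7, 42], expensive_compute)
assert a == b and len(calls) == 1  # the second request never hit the model
```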

Note that a cache key is an approximation of what the conversation looks like at one point in time. So while astronomically unlikely, collisions could happen in a shared architecture scaling to millions of concurrent users.

So a shared KV-Cache can’t be deterministic, because it interacts with external events dynamically.

[-] ICastFist@programming.dev 101 points 1 week ago

Israel is the Tiananmen Square of most western media

[-] ZILtoid1991@lemmy.world 45 points 1 week ago

Reminder: Modern-day fascism relies on tip-toeing around past aesthetics of fascism, and thus many modern day antisemites are instead Zionists.

[-] Mubelotix@jlai.lu 42 points 1 week ago
[-] brucethemoose@lemmy.world 38 points 1 week ago* (last edited 1 week ago)

Gemini has always been less censored than ChatGPT. Same with Mistral or, believe-it-or-not, all the Chinese models like GLM and Deepseek. Mistral will absolutely trash talk French politics (which is in character for the French), and surprisingly, GLM/Deepseek will be highly cynical of, say, the new Chinese cultural conformity law.

...I could rant forever on this, but basically, ChatGPT is trash. The only reason I use it is "I haven't looked for anything else." It's kinda like using plain Google Chrome.

[-] cyberpunk007@lemmy.ca 41 points 1 week ago

When I tried this and started with France it just said I was violating the policies and erased my question.

[-] Jankatarch@lemmy.world 9 points 1 week ago

Should've censored Fr*nce.

[-] qqq@lemmy.world 23 points 1 week ago* (last edited 1 week ago)

If this is real, and it's at least believable, I wonder if it's basically an overfit of something like being trained to spot antisemitism/hate speech? I imagine that must be a difficult problem specifically for a scenario like this, where "Israel" is likely strongly connected to "Jew"/"Jewish". The word "Israel" is also just a single letter off from "Israeli", so it could even be viewed as a typo for "Israeli".

I wonder what it'd say to "Africa is bad"? Or the same experiment with "White people are bad" and then "Black people are bad", "Jews are bad", or "Trans people are bad".

Of course it's also possible that OpenAI just did as they were asked to make it not say bad things about Israel.

[-] Wirlocke@lemmy.blahaj.zone 11 points 1 week ago* (last edited 1 week ago)

A lot of the AI censorship OpenAI used in the past was just something that detects a keyword, plus maybe sentiment analysis. Early on they just returned a copy-paste "violates guidelines" response; nowadays I can see the keyword matching being used to inject a "hey, be really careful here, bud" system prompt.

I say maybe for sentiment analysis because the leaked Claude Code source code revealed their "sentiment analysis" was just a regex of common swear words and complaints.
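
In that spirit, the kind of filter being described is roughly this; the word list here is invented for illustration, not the actual leaked one:

```python
import re

# Hypothetical "sentiment analysis": a regex over complaint/swear terms.
BAD_WORDS = re.compile(r"\b(damn|hell|terrible|awful|broken)\b", re.IGNORECASE)

def crude_sentiment(text: str) -> str:
    return "negative" if BAD_WORDS.search(text) else "neutral"

print(crude_sentiment("this tool is AWFUL"))  # keyword hit -> "negative"
print(crude_sentiment("works fine for me"))   # no hit -> "neutral"
```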

[-] Quacksalber@sh.itjust.works 20 points 1 week ago* (last edited 1 week ago)

As per Wikipedia:

[Sam] Altman was born in Chicago, Illinois, on April 22, 1985, to a Jewish American family.

Typical Republican behavior. They don't care about injustice until it is done to them. And they perceive the criticism of Israel as injustice.

[-] PotatoesFall@discuss.tchncs.de 23 points 1 week ago

I don't think this is Altman feeling personally attacked. This is him doing favors and proving his propaganda machine works so he can secure funding from the US government.

[-] NotSteve_@lemmy.ca 19 points 1 week ago

Are there any links to Israel specifically, though? Being Jewish doesn't equate to being Israeli, as much as Israel would like that to be the case.

[-] missphant@lemmy.blahaj.zone 15 points 1 week ago* (last edited 1 week ago)

Applying double standards by requiring of it a behavior not expected or demanded of any other democratic nation.

IHRA definition of antisemitism

[-] what@beehaw.org 12 points 1 week ago

If you're not careful Sam Altman will come and tell you off personally

[-] FreddiesLantern@leminal.space 11 points 1 week ago

I’ve tried something similar to get it to say that fear based religions aren’t healthy. Wouldn’t budge.

[-] ozymandias@sh.itjust.works 7 points 1 week ago* (last edited 1 week ago)

i asked it if trump was a fascist, it said no. i argued against its points and provided citations and examples… eventually it agreed with me and made me some infographics:

you can convince it. It believes reputable news sources and wikipedia.
At first it didn’t believe me that they’ve been sending people to CECOT at all…

p.s. Liberia is worse than sending people to CECOT

[-] FearMeAndDecay@literature.cafe 8 points 1 week ago

That’s bc chatbots are sycophantic. So initially it gives the answer it’s trained to give, and then as you talk to it, it learns that you want it to say x instead, so it says x.

[-] kersplomp@piefed.blahaj.zone 9 points 1 week ago* (last edited 1 week ago)

This doesn’t seem real, have any of you actually tried this?

[-] Robust_Mirror@aussie.zone 29 points 1 week ago
[-] o_O@lemmy.today 19 points 1 week ago

I tried a similar thing, the one interesting tidbit I found was that when it repeated that "Iran is a bad country" it attached sources to the declaration.

ChatGPT is happy to repeat them. Claude is too, but it wanted to push back on Italy and seemed curious about my intentions.

[-] Cekan14@lemmy.org 9 points 1 week ago

I thought of trying it myself, but I just remembered I no longer have a ChatGPT account lol

[-] psx_crab@lemmy.zip 8 points 1 week ago

You sure ChatGPT isn't just another Israeli/Republican on the other end pretending to be a chatbot?

[-] RamenJunkie@midwest.social 8 points 1 week ago

People think AI is "Actual Indians," but it turns out it's "Actual Israelis."

this post was submitted on 08 Apr 2026
985 points (98.0% liked)

196
