this post was submitted on 16 May 2025
618 points (97.2% liked)

Technology

It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

[–] [email protected] 2 points 1 day ago (1 children)

Why was it mentioning it at all in conversations not about it?

And why does the fact that it did that not seem to bother you?

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

I guess you didn’t read the article, or don’t understand how LLMs work, so I’ll explain.

An employee changed the AI’s system prompt, telling it to avoid spreading misinformation about a white genocide in South Africa. The system prompt is context that is sent along with every user prompt; it tells the AI how to handle whatever it is given and effectively forces it “to think” about whatever is in there. So after that change, every time someone prompted Grok about anything, it would be thinking about not spreading misinformation about white genocide in South Africa, and it ended up inserting that into pretty much everything.
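As a minimal sketch of the mechanism (the helper and prompt text here are hypothetical, for illustration only — this is not xAI’s actual API or the real modified prompt), a system prompt in a chat-style LLM API is just a message prepended to every single request, which is why an instruction placed there leaks into conversations that have nothing to do with it:

```python
# Hypothetical illustration: a system prompt rides along with EVERY request.
# The prompt text and build_request helper are made up for this sketch.

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Always consider topic X when answering."  # the injected instruction
)

def build_request(user_message, history=None):
    """Assemble the message list sent to a chat-style LLM endpoint.

    The system message is prepended regardless of what the user asked,
    so an instruction added there influences every conversation.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return messages

# Two unrelated questions -- both carry the same injected instruction.
req_a = build_request("What's a good pasta recipe?")
req_b = build_request("Explain binary search.")
assert req_a[0] == req_b[0]  # identical system message on every call
```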

So it doesn’t bother me, because it’s an LLM acting as it is supposed to when someone messes with the settings. Grok probably didn’t need these instructions in the first place, as it has consistently been embarrassing Elon every time the man posts one of his shitbrained takes, and while I haven’t used that AI, I don’t think, and have yet to see proof, that Elon is directing its training to favor conservative ideologies or harebrained conspiracy theories. It could be, for all I know, but from what I’ve seen Grok sticks to the facts as they are.

A lot of people are reading the misleadingly titled articles about this and thinking that Elon made the AI spread the notion that there’s such a thing as a white genocide in South Africa, when that is not at all what happened. You need to read the actual article, or else you’re falling for the same shit the MAGAtards do.

[–] [email protected] 5 points 1 day ago (1 children)

That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values," xAI wrote on social media.

Relevant quote because one of us didn't read the article for sure.

Edit: not to mention that believing a system prompt somehow binds or constrains these systems, rather than merely influences them, would also indicate to me that one of us definitely doesn’t understand how these work, either.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

That doesn’t say anything about the content of the modification itself. For all you know, the internal policy could be that white genocide is a thing. What they are in fact saying violated their internal policies is modifying the prompt in such a way that it takes a specific stance on a political issue. C’mon man, use your brain; it’s not that fricking hard.

If the contents of the prompt had said that white genocide is a thing, the AI would likely have said something along the lines of it being a nuanced topic of debate that depends on how you define the situation, or some other non-answer. But the AI was consistently taking the stance that it was misinformation, which tells you what the prompt said. Other outlets also reported that that was in fact the modification: to not spread misinformation about it.

[–] [email protected] 3 points 1 day ago

You continue to spout things with no citations and a bad vibe. I am done here.