this post was submitted on 14 May 2025

Ars Technica - All Content


Users on X (formerly Twitter) love to tag the verified @grok account in replies to get the large language model's take on any number of topics. On Wednesday, though, that account started largely ignoring those requests en masse in favor of redirecting the conversation towards the topic of alleged "white genocide" in South Africa and the related song "Kill the Boer."

Searching the Grok account's replies for mentions of "genocide" or "boer" currently returns dozens, if not hundreds, of posts where the LLM responds to completely unrelated queries with quixotic discussions about alleged killings of white farmers in South Africa (though many were deleted just before this post went live; links in this story have been replaced with archived versions where appropriate). The sheer range of these non sequiturs is somewhat breathtaking: everything from questions about Robert F. Kennedy Jr.'s disinformation to discussions of MLB pitcher Max Scherzer's salary to a search for new group-specific put-downs sees Grok quickly turning the subject back toward the suddenly all-important topic of South Africa.

It's like Grok has become the world's most tiresome party guest, harping on its own pet talking points to the exclusion of any other discussion.

Comments



[–] [email protected] 19 points 2 days ago (1 children)

It’s funny in a somewhat horrifying way, because these attempts to control the narrative are so hamfisted that it’s laughable.

I’m worried about the day, not far off, when AI is good enough to subtly inject disinformation, even disinformation created by incompetent idiots like this.

[–] [email protected] 9 points 2 days ago (1 children)

The sad part is, however hamfisted they may be, they work.

[–] [email protected] 2 points 2 days ago (1 children)

And they follow the same pattern: make their intention known (mostly subtly), spread it, and once the damage is already done, "fix" the bug. Then run an astroturfed PR campaign sealioning about or dismissing any wrongdoing.

Rinse and repeat.

[–] [email protected] 1 points 1 day ago

I know multiple people who use this AI platform every day. Imagine how easy it will be, behind the scenes, to keep adding little bits of misinformation to change their worldviews. Like Fox News, but sneakier and more powerful. I'm almost positive we will be the ones shunned for our disapproval of AI, even though it's clearly not good for humanity.