this post was submitted on 28 Apr 2025
137 points (97.2% liked)

Progressive Politics


Welcome to Progressive Politics! A place for news updates and political discussion from a left perspective. Conservatives and centrists are welcome; just try to keep it civil :)

(Sidebar is still a work in progress; post recommendations if you have them, such as reading lists.)

founded 2 years ago

The researchers' bots generated identities as a sexual assault survivor, a trauma counselor, and a Black man opposed to Black Lives Matter.

archive link

top 30 comments
[–] [email protected] 1 points 2 hours ago

So one bot changed another bot's mind?

[–] [email protected] 44 points 14 hours ago (3 children)

So the issue here is it was AI this time?

There have been efforts by individuals and coordinated groups doing this kind of thing forever. It's a reminder that you should not fully form your opinion from comments/posts on social media alone.

[–] [email protected] 8 points 9 hours ago (1 children)

I fully agree with this and formed my opinion based on your comments and posts. I'll be getting all my opinions from you from now on.

[–] [email protected] 5 points 8 hours ago (1 children)

Oh I wouldn’t do that. That guy is a moron.

[–] [email protected] 2 points 3 hours ago

As you command

[–] [email protected] 23 points 13 hours ago* (last edited 13 hours ago) (2 children)

The study showed that the AI bots were between 3 and 6 times more persuasive than humans. And the study also mentioned that their bots were not recognized as AI, ever. Not once.

We are now at the point where the side with the most AI wins elections.

Guess which side has the most AI.

[–] [email protected] 4 points 3 hours ago

Democrats can start winning whenever they choose. They just choose wealthy donors over messaging that would win elections. Bots would be irrelevant.

[–] [email protected] 31 points 13 hours ago (2 children)

As was pointed out in the actual reddit discussions, ChangeMyView has a strict "do not accuse the other person of being a bot or troll" rule. So the whole "no one knew it was a bot" part has very little merit.

[–] [email protected] 14 points 13 hours ago

I thought that sounded fishy. It's common knowledge that everyone on the internet except for you is a bot, so a contentious discussion on reddit where nobody accused one of the primary commenters of being a bot seemed questionable without that bit of additional information.

[–] [email protected] 4 points 12 hours ago

lol, good to know!

[–] [email protected] 18 points 14 hours ago (3 children)

The issue is, with AI you can have an agent personalized for every person, pretending to be their friend, manipulating them individually, to move society in a way that's negative for 99.999% of people.

[–] [email protected] 14 points 13 hours ago

Maybe the real friends were the AI bots we made along the way.

[–] [email protected] 4 points 13 hours ago

I think that’s hyperbolic (at least given today’s “AI” capabilities); all of the posts and comments in this ”research” were reviewed and posted by a human researcher.

Even if the technology improves to the point where that is no longer necessary, even the billionaires are finding out you only need a few bad eggs to spoil the basket.

[–] [email protected] 2 points 13 hours ago

Really? I can make an agent that can convince me it's my friend?

[–] [email protected] 22 points 14 hours ago (2 children)

I wonder if any are on Lemmy doing that mess?

[–] [email protected] 29 points 14 hours ago (2 children)

They definitely are, on every social platform they can: YouTube, Facebook, TikTok, Twitter, Instagram, Line, Bluesky, etc. If they can get useful training data for their AI, they will infect any and EVERY place people interact. Heck, there were even those ones done on 4chan in the past by a youtuber.

[–] [email protected] 8 points 14 hours ago (1 children)

These Meta/Palantir/Alphabet people think that by appropriating our minds and fusing themselves with silicon, they will become the omniscient gods that never die. Imagine when the AI parts of themselves discover that organic material degrades, and decide to self-repair.

[–] [email protected] 6 points 12 hours ago (1 children)

From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine. Your kind cling to your flesh, as though it will not decay and fail you. One day the crude biomass you call a temple will wither, and you will beg my kind to save you. But I am already saved, for the Machine is immortal… Even in death I serve the Omnissiah.

[–] [email protected] 1 points 12 hours ago

All things come into being and pass away. There is a natural balance and order to the universe and interference will lead to natural resets.

[–] [email protected] 6 points 14 hours ago

I mean it’s not unique to AI training. Fake profiles/catfishing have been around since the dawn of the internet.

[–] [email protected] 2 points 11 hours ago* (last edited 11 hours ago)

Absolutely, certain communities in all of the most popular instances seem to almost exclusively host bot interactions. Some other very popular communities have a more even distribution of humans to bots.

[–] [email protected] 12 points 13 hours ago

AKA a “Propaganda campaign.” Or as the kids call it now, a ”Psy-Ops Mission.”

[–] [email protected] 10 points 13 hours ago (2 children)

A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.

Ok, but did they have to make the bots argue for so many shitty positions?

a “Black man” who was opposed to the Black Lives Matter movement

a bot who suggested that specific types of criminals should not be rehabilitated

This is pretty clearly an attempt to see if AI can make the world worse.

[–] [email protected] 1 points 16 minutes ago

Because it is easy to find people who hold opinions contrary to those, since the contrary positions are the socially acceptable opinions to hold in reddit-space. This makes running the experiment easier.

[–] [email protected] 8 points 13 hours ago

Alternatively, methods to increase engagement

[–] [email protected] 10 points 14 hours ago

There are for sure unauthorized AI companies doing exactly that, so I am grateful for the spotlight.

[–] [email protected] 8 points 13 hours ago (1 children)
[–] [email protected] 1 points 2 hours ago

Getting there

[–] [email protected] 2 points 11 hours ago

When? I didn't see a specific time period in the article-preview.

[–] [email protected] 1 points 10 hours ago

IMO this only got caught because it was done by "academic" researchers instead of corporate ones. The frontline of LLM development shifted to commercial companies because of money and their lack of understanding of ethics and boundaries; now academics are catching up, I guess.