this post was submitted on 04 Feb 2025
93 points (90.4% liked)

Technology


Using Reddit's popular ChangeMyView community as a source of baseline data, OpenAI had previously found that 2022's ChatGPT-3.5 was significantly less persuasive than random humans, ranking in just the 38th percentile on this measure. But that performance jumped to the 77th percentile with September's release of the o1-mini reasoning model and up to percentiles in the high 80s for the full-fledged o1 model.

So are you smarter than a Redditor?

top 23 comments
[–] [email protected] 2 points 1 hour ago

Their models are more persuasive than a person and/or an older model with internet access. Very impressive. I wager your stock is worth all of the gold in Fort Knox ($0).

[–] [email protected] 6 points 4 hours ago

I wonder how many of the Reddit comments were from inauthentic sock puppets. I'd guess that subreddit was also used by influence peddlers to train and test their own human disinformation agents.

[–] [email protected] 8 points 5 hours ago

Bot on bot crime.

[–] [email protected] 9 points 6 hours ago

Comparing Assumed Intelligence with an average Redditor is like asking: Are you smarter than a fifth grader?

Hint: Nope.

[–] [email protected] 81 points 10 hours ago (1 children)

That bar is so low it's practically a tripping hazard in hell.

[–] [email protected] 5 points 10 hours ago
[–] [email protected] 10 points 8 hours ago

So OpenAI is admitting to botting comments on Reddit. To be honest, with how shit Reddit is, I'd actually rather read AI comments than the same stupid Reddit meme being repeated for the last decade.

[–] [email protected] 35 points 10 hours ago* (last edited 10 hours ago) (1 children)

If you don't read the article, this sounds worse than it is. I think this is the important part:

ChatGPT's persuasion performance is still short of the 95th percentile that OpenAI would consider "clear superhuman performance," a term that conjures up images of an ultra-persuasive AI convincing a military general to launch nuclear weapons or something. It's important to remember, though, that this evaluation is all relative to a random response from among the hundreds of thousands posted by everyday Redditors using the ChangeMyView subreddit. If that random Redditor's response ranked as a "1" and the AI's response ranked as a "2," that would be considered a success for the AI, even though neither response was all that persuasive.

OpenAI's current persuasion test fails to measure how often human readers were actually spurred to change their minds by a ChatGPT-written argument, a high bar that might actually merit the "superhuman" adjective. It also fails to measure whether even the most effective AI-written arguments are persuading users to abandon deeply held beliefs or simply changing minds regarding trivialities like whether a hot dog is a sandwich.
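Reading the relative ranking described above as pairwise comparisons, a rough sketch of how such a percentile works (the `persuasion_percentile` helper and all scores are illustrative assumptions, not OpenAI's actual method or data):

```python
# Hypothetical sketch: "percentile" as the fraction of human responses
# the AI-written response outranks. Scores are made up for illustration.

def persuasion_percentile(ai_score: float, human_scores: list[float]) -> float:
    """Percentage of human responses the AI response outranks."""
    beaten = sum(1 for h in human_scores if ai_score > h)
    return 100 * beaten / len(human_scores)

# Toy data: even if every response is only weakly persuasive in absolute
# terms, the AI can still land in a high percentile.
humans = [1.0, 1.2, 1.1, 1.3, 0.9, 1.4, 1.0, 1.2, 1.1, 1.0]
print(persuasion_percentile(2.0, humans))  # AI "2" beats every human "1" -> 100.0
```

This is the point the quoted passage makes: the metric is purely relative, so a high percentile says nothing about whether any reader actually changed their mind.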

[–] [email protected] 27 points 10 hours ago (3 children)

This is the buried lede that's really concerning, I think.

Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.

Some time in the future all online discourse may be just a giant AI fueled tool sold to the highest bidders to manufacture consent.

[–] [email protected] 18 points 9 hours ago* (last edited 9 hours ago) (1 children)

Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.

A very large portion of people, possibly more than half, do change their views to fit in with everyone else. So an army of bots pretending to have a view will sway a significant portion of the population just through repetition and exposure with the assumption that most other people think that way. They don't even need to be convincing at all, just have an overwhelming appearance of conformity.

[–] [email protected] 2 points 9 hours ago (1 children)

So if a bunch of accounts on lemmy repeat an opinion that isn’t popular with people I meet IRL then that could be an attempt to change public opinion using bots on lemmy?

[–] [email protected] 2 points 8 hours ago (1 children)

In the case of Lemmy, it is more likely that the members of communities are people, because the population is small enough that a mass influx of bots would be easy to notice compared to Reddit. Plus the Lemmy communities tend to have obvious rules and enforcement that filter out people who aren't on the same page.

For example, you will notice that general opinions on .world, .ml, and blahaj fit their local instance cultures, and trying to change that with bots would likely run afoul of the moderation or the established community members.

It is far easier to slip bots into a large pool of users than into a smaller one.

[–] [email protected] 1 points 6 hours ago (1 children)

It just has to be proportional. Reports on these bot farms have shown that they absolutely go into small niche areas to influence people. Facebook groups being one of the most notable that comes to mind.

[–] [email protected] 1 points 5 hours ago (1 children)

What do you think are the views being promoted by bots on lemmy?

Are there accounts you think are bots, or are you assuming that accounts with opinions differing from people you know in real life are bots? I know people who have wildly different views in real life, some of whom I avoid because of those views.

[–] [email protected] 0 points 4 hours ago (1 children)

It is tough to say. But there are red flags. Like when an opinion on a post is repeated a lot by different accounts in the thread but is heavily downvoted, and an opposing opinion is heavily upvoted.

This is what I would expect to see if bots brigading a thread are using unpopular talking points.

For example, I see it a lot with anti-DNC threads, with the same accounts posting similar comments throughout multiple reposts of a single post. If I had to guess what views they are trying to promote, I would say they seem to be trying to discourage Democrats from voting by sowing apathy, aka FUD.

[–] [email protected] 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

The exact same scenario plays out when .ml users chime in on a .world news thread about China/Russia, and the reverse happens too. On .world the .ml tankies get downvoted into the ground, and on .ml the .world users who call out tankie shit get banned. That is an instance culture clash that fits the exact scenario.

As for the anti-Dem stuff, plenty of us who vote for them don't actually like them, and it doesn't take bots to drum up votes for posts that criticize them, but we will downvote the ones that seem to be discouraging others from voting Dem. If they were brigading, the anti-Dem posts would get upvoted even more on .world.

There are likely to be malicious actors, and probably some vote manipulation. But overall it seems far more likely that on Lemmy the vast majority are still real users, both posting and voting, and that the malicious actors who do exist are trying to sway opinion directly instead of through bots.

[–] [email protected] 0 points 3 hours ago

I’m sure some of it is organic, but there have been times when I see a post, read the comments, and they are all talking in a way that pushes a similar opinion. Then I see the same post reposted and notice the same accounts using similar comments, if not exactly the same ones. Oftentimes they're overly prepared with links and info dumping, like a lot of effort was made to support their opinion. It is very sus.

We may not be able to verify when it is happening but we absolutely do know there are organized efforts to shape public opinion by using multiple accounts to push talking points. And this is done by many different types of organizations, from countries like China and Russia, but also by companies like Monsanto or the fossil fuel industry. It has been happening rampantly for years.

[–] [email protected] 10 points 9 hours ago

It's no surprise that social media companies are working on AI. Their platforms are no longer social; they are just tools to control public opinion.

Governments and oligarchs will pay any amount of money to have that kind of power.

[–] [email protected] 1 points 9 hours ago* (last edited 9 hours ago)

It already is, at least in Armenia and Azerbaijan. EDIT: I mean, the bots were crude-ish, but they don't have to get better. Harder goals, better bots.

[–] [email protected] 16 points 10 hours ago

I mean... on one hand it's hardly surprising. Off the bat, we know AI is more knowledgeable than any single individual who doesn't bother to research... and well, 80% of online forum-type posts aren't exactly researched. Second, AI can confidently bullshit in a way that can only be easily debunked by someone knowledgeable.

[–] [email protected] 11 points 10 hours ago* (last edited 10 hours ago)

Aside from a shrinking number of subs, the only thing Redditors can convince me of is that I should stop looking at Reddit. So if that's your bar . . .

[–] [email protected] 2 points 9 hours ago

You mean those aggressive morons who are more Reddit-ish than you'd expect even from Reddit are bots? Not a surprise.

[–] [email protected] -1 points 10 hours ago

Parasite Sam Altman... Nobody is buying this shit anymore.