this post was submitted on 17 Sep 2024
125 points (95.0% liked)

From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren't 'too far gone' to reconsider their convictions and change their minds.

[–] [email protected] 22 points 1 month ago* (last edited 1 month ago) (2 children)

Let me guess, the good news is that conspiracism can be cured but the bad news is that LLMs are able to shape human beliefs. I'll go read now and edit if I was pleasantly incorrect.

Edit: They didn't test the model's ability to inculcate new conspiracies, obviously that'd be a fun day at the office for the ethics review board. But I bet with a malign LLM it's very possible.

[–] [email protected] 19 points 1 month ago (1 children)

A piece of paper dropped on the ground can 'shape human beliefs'. That's literally a tool used in warfare.

The news here is that conspiratorial thinking can be relieved at all.

[–] [email protected] 1 points 1 month ago (1 children)

"AI is just a tool" is a bit naïve. The power and scope of this tool give it devastating potential. It's a good idea to be concerned and talk about it.

[–] [email protected] 7 points 1 month ago (1 children)

Agreed - but acting surprised that it can change opinions (for the worse) doesn't make sense to me; that's obvious, since anything can. That AI can potentially do so even more effectively than other things is indeed worth talking about as a society (and is, again, pretty obvious).

[–] [email protected] 3 points 1 month ago

I wasn't trying to downplay it. If it can be wielded thoughtfully at scale, it could be life-changing for literally millions.

The risk is that billionaires own these models, and far too often we see their interests aligned with fascism. If they choose to place a motive in this box, they now know it will have a quantifiable effect.

[–] [email protected] 3 points 1 month ago (1 children)

"LLMs are able to shape human beliefs."

FUCKING THANK YOU!

I have been trying to get people to understand that the danger of AI isn't some deviantart pngtuber not getting royalties for their Darkererer Sanic OC. The danger is that AI can appear like any other person on the internet, can engage from multiple accounts, and, with access to someone's near-entire web history, can craft 20 believable scenarios catered to every weakness in that person's psychology.

I'm glad your post is getting at least some traffic, but even then it's not gonna be enough.

The people that understand the danger have no power to stop it, the people with the power to stop it are profiting off of it and won't stop unless pressured.

And we can't pressure them if we are arguing art rights and shitposting constantly.

[–] [email protected] 3 points 1 month ago (1 children)

We need to make it simpler and connect the dots. Like, what's the worst that could happen when billionaires have exclusive control over a for-profit infinite gaslighting machine? This needs to be spelled out.

[–] [email protected] 1 points 1 month ago

I'm writing a short horror story that will at least illustrate what I see as the problem. That's a form that can be easier to digest.