this post was submitted on 29 Apr 2025
946 points (98.4% liked)

memes

all 49 comments
[–] [email protected] 8 points 5 hours ago* (last edited 5 hours ago)

Manipulating users with AI bots to research what, exactly?

Researching what!!!

A/B Testing in Digital Marketing

[–] [email protected] 3 points 5 hours ago

Fuck reddit and its inane smarmy rules

[–] [email protected] 36 points 19 hours ago (1 children)

Err, yeah, I get the meme and it's quite true in its own way...

BUT... This research team REALLY needs an ethics committee. A heavy-handed one.

[–] [email protected] 17 points 18 hours ago (2 children)

As much as I want to hate the researchers for this, how are you going to ethically test whether you can manipulate people without... manipulating people? And isn't there an argument to be made for harm reduction? I mean, this stuff is already going on. Do we just ignore it, or only test it in sanitized environments that won't really apply to the real world?

I dunno, mostly just shooting the shit, but I think there is an argument to be made that this kind of research and its results are more valuable than the potential harm. Tho the way this particular research team went about it, including changing the study fundamentally without further approval, does pose problems.

[–] [email protected] 16 points 17 hours ago

how are you going to ethically test whether you can manipulate people without… manipulating people?

That's a great question. In the US, researchers are generally obliged (by their universities or their funders) to use an Institutional Review Board to review any proposed experiment involving human subjects. The IRB look for things like: causing physical or emotional harm to the subjects, taking advantage of vulnerable populations, using deception without consent, etc. The IRB might let you do something like manipulate a subject, if the subjects were informed that they might be manipulated or deceived. Yes, this might introduce an observer effect, but this type of review is generally accepted as being necessary for doing ethical research. However, I'm not familiar with the research in question or with the requirements of the Univ of Zurich where the researchers are from.

[–] [email protected] 4 points 17 hours ago (1 children)

from what I remember from my early psych class, manipulation can be used, but should be used carefully in an experiment.

there’s a lot that goes into designing a research experiment that tests or requires the use of manipulation, as appropriate approvals and ethics reviews are needed.

and usually it should be done in a “controlled” environment where there’s some manner of consent and compensation.

I have not read the details of what was done here, but the research does not seem to have happened in a controlled env; participants had no way to express consent to opt in or opt out, and afaik they were not compensated.

any psych or social sci peeps, feel free to jump in to correct me if I say something wrong.

on a side note, another thing that this meme suggests is that both of these situations are somehow equal. IMO, they are not. researchers and academics should be expected to uphold a code of ethics more so than corporations.

[–] [email protected] 7 points 17 hours ago* (last edited 17 hours ago) (1 children)

Tutoring psych right now - another big thing is the debrief.

It needs to be something you can’t do without lying, something important enough to be worth lying about, and you must debrief the participants at the end. I really doubt the researchers went back and messaged every single person that interacted with them revealing the lie.

[–] [email protected] 2 points 13 hours ago (1 children)

I'm planning a long term study on gaslighting myself.

[–] [email protected] 2 points 7 hours ago (1 children)

Going to be difficult to double blind that…

[–] [email protected] 167 points 1 day ago* (last edited 21 hours ago) (3 children)

To be fair, though, this experiment was stupid as all fuck. It was run on /r/changemyview to see if users would recognize that the comments were created by bots. The study's authors conclude that the users didn't recognize this. [EDIT: To clarify, the study was seeing if it could persuade the OP, but they did this in a subreddit where you aren't allowed to call out AI. If an LLM bot gets called out as such, its persuasiveness inherently falls off a cliff.]

Except, you know, Rule 3 of commenting in that subreddit is: "Refrain from accusing OP or anyone else of being unwilling to change their view, of using ChatGPT or other AI to generate text, [emphasis not even mine] or of arguing in bad faith."

It's like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. "Obviously these are all brainwashed sheep who love the regime", happily concludes the dumbest pollster in history.

[–] [email protected] 4 points 4 hours ago

It’s like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. “Obviously these are all brainwashed sheep who love the regime”, happily concludes the dumbest pollster in history.

I don't particularly like this analogy, because /r/changemyview isn't operating in a country where an occupying army was bombing weddings a few years earlier.

But this goes back to the problem at hand. People have their priors (my bots are so sick nasty that nobody can detect them / my liberal government was so woke and cool that nobody could possibly fail to love it) and then build their biases up around them like armor (any coordinated effort to expose my bots is cheating! / anyone who prefers the new government must be brainwashed!)

And the Bayesian Reasoning model fixates on the notion that there are only ever a discrete predefined series of choices and uniform biases that the participant must navigate within. No real room for nuance or relativism.
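To make the criticism concrete, here is a minimal sketch (my own illustration, not from the thread) of the kind of discrete Bayesian update being described: the model can only ever shift probability among a fixed, predefined set of hypotheses, with no mechanism for "something I hadn't considered".

```python
def bayesian_update(prior, likelihood):
    """Return the posterior over the same fixed hypothesis set.

    prior: dict mapping hypothesis -> prior probability
    likelihood: dict mapping hypothesis -> P(observation | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two predefined hypotheses about a commenter; the update can never
# assign mass to any option outside this set.
prior = {"human": 0.9, "bot": 0.1}
likelihood = {"human": 0.2, "bot": 0.8}  # P(observed phrasing | hypothesis)
posterior = bayesian_update(prior, likelihood)
```

However strong the evidence, the posterior is still a distribution over only "human" and "bot", which is the rigidity the comment is pointing at.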

[–] [email protected] 59 points 1 day ago (1 children)

Wow. That's really fucking stupid.

[–] [email protected] 3 points 23 hours ago (1 children)

Rule 3 of commenting

Reddit has great rules…

[–] [email protected] 4 points 21 hours ago (1 children)

Can you explain your complaint a bit more? I'm trying to figure out just what you mean with your comment, but all I can see out of it is "reddit sucks". Which... yeah, but in this instance why?

[–] [email protected] 2 points 13 hours ago (1 children)

They overpolice opinions that could be true. You can't even squeak out a fart without someone banning you.

[–] [email protected] 1 points 7 hours ago (1 children)

Who is "they" in this situation?

[–] [email protected] 1 points 4 hours ago (1 children)

Reddit, moderators, the community: you name it.

[–] [email protected] 1 points 3 hours ago (1 children)

So, nothing specific at all; just kind of lashing out for the fuck of it by latching on to some random nonsense.

Biggest advice anyone can give you right now: let it go. Reddit can't hurt you anymore. Constantly obsessing about it, no matter how much it does suck, will only drag you down a spiral of contempt and despair.

Most of all, don't drag others down that spiral either. We've got much bigger issues than some reddit mods or whatever abusing power.

[–] [email protected] 1 points 2 hours ago* (last edited 2 hours ago)

The post title is "know the Reddit rules", so ridiculing them is apropos to the topic.

This place could easily fall into the same trap if people ease up. 🤷

[–] [email protected] 67 points 1 day ago (1 children)

Deleted by moderator because you upvoted a Luigi meme a decade ago

...don't mind me, just trying to make the reddit experience complete for you...

[–] [email protected] 24 points 23 hours ago (4 children)

that's funny.

I had several of my Luigi posts and comments removed -- on Lemmy. let's see if it still holds true.

[three screenshots of the removed posts]

[–] [email protected] 30 points 22 hours ago (1 children)

.world is known (largely due to the Luigi Mangione stuff) to have moderation that's a bit more heavy handed and more similar to the sort of "corporate Internet".

No real hate for them and they've indicated in the past that some of their actions are just to comply with their local laws. But if you're looking for an older internet experience you'll wanna move to a different instance.

[–] [email protected] 17 points 21 hours ago* (last edited 21 hours ago)

That's why I left .world in December. I get why they did it, but it just showed I don't want to be in the most popular instance since they're always going to be the first one targeted and are more censorship happy as a result.

[–] [email protected] 21 points 22 hours ago

Well then, as lemmy's self-designated High Corvid of Progressivity, I extend to you the traditional Fediversal blessing of:

remember kids:

A place in heaven is reserved for those who speak truth to power

[–] [email protected] 9 points 23 hours ago (1 children)

Lemmy is a collection of different instances with different administrators, moderators, and rules.

[–] [email protected] -1 points 23 hours ago (3 children)

this was Lemmy.world that did it.

last I knew anything that had the word "Luigi" in the meme was blocked.

[–] [email protected] 6 points 19 hours ago* (last edited 19 hours ago)

I have no life and am practically a fixture of Lemmy and I see more talk about Lemmy.World being toxic and ban happy than I have actually seen Lemmy.World being toxic and ban happy. Especially around shit about Luigi Mangione. Another common complaint is that CSAM is often posted and left up for hours/days. Which is complete and utter bullshit.

[–] [email protected] 9 points 22 hours ago

Then move your ass over to a different instance. That's the entire point of lemmy

[–] [email protected] 5 points 22 hours ago (1 children)

Last I heard, Lemmy.ml and Lemmy.world are the most toxic, Reddit-like instances, so it might be perfectly in line with their usual way of ruling

[–] [email protected] -3 points 16 hours ago

world may be reddit like and toxic, and this may be due to its high number of users.

However lemmy.ml is nothing like reddit. Nor is it toxic, unless you diss communism.

[–] [email protected] 3 points 20 hours ago

That's because your username is wrong. Your username is [email protected], but it should be [email protected]. That would fix your problem.

[–] [email protected] 8 points 20 hours ago

So they banned the people who successfully registered a bunch of AI bots and had them fly under the mods' radar. I'm sure they're devastated and will never be able to get on the site again...

[–] [email protected] 24 points 1 day ago (2 children)

That story is crazy and very believable. I 100% believe that AI bots are out there astroturfing opinions on reddit and elsewhere.

I'm unsure if that's better or worse than real people doing it, as has been the case for a while.

[–] [email protected] 7 points 23 hours ago (1 children)

Belief doesn't even have to factor; it's a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It's better over here.

[–] [email protected] 8 points 23 hours ago (1 children)

I worry that it's only better here right now because we're small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on reddit data?

[–] [email protected] 4 points 23 hours ago* (last edited 23 hours ago)

Honestly? I'm no expert and have no actionable ideas in that direction, but I certainly hope we're able to work together as a species to overcome the unchecked greed of a few parasites at the top. ~#LuigiDidNothingWrong~

[–] [email protected] 4 points 23 hours ago

What is likely happening is that bots are manipulating bots

[–] [email protected] 26 points 1 day ago (1 children)

You dare suggest that corporations are anything but our nearest and dearest friends? They'd never sell us out. Never!

[–] [email protected] 5 points 22 hours ago

It's very possible, almost entirely a reality, that corporations can simultaneously be our enemy, and the enemy of our enemy.

But they're never our friend.

[–] [email protected] 17 points 1 day ago (1 children)

$0.50 says that the "reveal" was part of the study protocol. I.e. "how people react to being knowingly vs. unknowingly manipulated".

[–] [email protected] 6 points 1 day ago

Seems dangerous; it's a breach of the ToS, I assume, so they're opening themselves up to possible liability if Reddit gets pissy. I'm actually surprised this kind of research gets IRB and other approval, given you're violating the ToS unless given a variance from it (I used to conduct research on social networks and had to get preapproved accounts for the purpose, and the data I was given was carefully limited.)

[–] [email protected] 4 points 23 hours ago

After all, it's all about con$$ent, eh?

[–] [email protected] 2 points 22 hours ago

Insert same picture meme