this post was submitted on 28 Sep 2023

SneerClub

1012 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago

The local Effective Altruism chapter had a stand at the university hobby fair.

Last time I read their charity guide spam email for student clubs, they were still mostly into the relatively benign end of EA stuff, listing some charities they had deemed most effective by some methodology. My curiosity got the best of me and I went to talk to them. I wanted to find out if they'd started pushing seedier stuff and whether the people at the stand were aware of the dark side of TESCREAL.

They seemed to have gotten into AI risk stuff, which was not surprising. Also, they seemed to be unaware of most of the incidents and critics I referred to, mostly only knowing about the FTX debacle.

They invited me to attend their AI risk discussion event, saying (as TREACLES adjacents always do) that they love hearing criticism and different points of view and so on.

On one hand, EA is not super big here and most of their members and prospectively interested participants are probably not that invested in the movement yet. This could be an opportunity to spread awareness of the dark side of EA and its adjacent movements and maybe prevent some people from falling for the cult stuff.

On the other hand, acting as the spokesman for the opposing case is a big responsibility, and the preparation is a lot of work. I'm slightly worried that pushing back at the event might escalate into a public debate or, even worse, some kind of Ben Shapiro-style affair where I'm DESTROYED with FACTS and LOGIC by some guy with a microphone and a primed audience. Also, dealing with these people is usually just plain exhausting.

So, I'm feeling conflicted and would like some advice from the best possible source: random people on the internet. Do y'all think it's a good idea to go? Do you think it's a terrible idea?

top 3 comments
[–] [email protected] 1 points 1 year ago

My impression is that the toxicity within EA is mainly concentrated in the Bay Area rationalists and in a few of the actual EA organizations. If it's just a local meetup group, it's probably just going to be some regular-ish people who are genuinely concerned about AI but hold some mistaken beliefs.

Just be polite and present arguments, and you might actually change minds, at least among those who haven't been sucked too far into Rationalism.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Like any IRL situation it’s probably more pertinent to read the room and be present, rather than theorycraft about what might happen.

That being said, my gut says this: there is going to be a large share of TREACLES people in the crowd. Here’s my argument.

  • The crowd will be mostly EA people.
  • Anyone in EA willing to go to an EA hosted talk about AI X risk is probably beyond the eye deworming charity phase of EA.

This isn’t the Scientology personality-test phase of EA; it’s the private seminar right before they teach you about Xenu, except their proselytisers aren’t nearly as charismatic or well trained in conversion.

I think the most viable targets for any detreacling would be any friends or tag-alongs at the event. Sort of like arguing on the internet: you don’t really hope to change the other party’s mind; you’re hoping to sway anyone who comes along and reads the thread.

I’d go, personally, if only for the spectacle.

[–] [email protected] 1 points 1 year ago

You don’t control the audience, and you can’t predict what you’ll be asked about or engaged on, so you can’t prepare for the full spectrum of responses.

What you can do is decide ahead of time what you feel you can and cannot cover, and what response you’ll use if something in the latter category comes up. You could defer engagement, point people to other sources, explicitly state you’re not ready to engage on that subtopic, etc. It’s not as complete a coverage as might otherwise be ideal, but you get to choose not to step into traps.

And if someone continues down such an avenue despite you saying you choose not to, you just call them on it and shut it down.

Know when you can engage, know when you can walk away.