this post was submitted on 08 Sep 2023
SneerClub
Less Wrong introduced me to a lot of interesting ideas, like how you can apply Bayesian reasoning to beliefs and make your beliefs "pay rent", but I'm not in love with the Sam Bankman-Fried of it all.
It’s not Bayesian reasoning without the actual math, and many beliefs can’t be easily quantified under any statistical framework.
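For context, the "actual math" being gestured at is just Bayes' rule, P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch, with made-up numbers purely for illustration:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior P(H|E) from a prior and two likelihoods.

    evidence = P(E) = P(E|H)P(H) + P(E|not H)P(not H)
    """
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Start 20% confident in a hypothesis, then see evidence that is three
# times likelier if the hypothesis is true (0.6 vs 0.2).
posterior = bayes_update(prior=0.2, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.429
```

The sticking point in practice is the inputs: for most real-world beliefs, nobody has defensible numbers for those likelihoods, which is exactly the objection above.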
All it really offers is unwarranted confidence in one’s own rationality, often used in these circles to cloak nauseating positions.
Making ideas "pay rent" is also used in these circles for black-pilling people into rejecting common sense and humanity. It's good to be skeptical of new ideas or new claims, and it's even good to analyze and synthesize your own beliefs, but it's dangerous to say that every belief is negotiable and to give others the tools to mould them (here be cult dragons).
Despite its flaws, the opening of the US Declaration of Independence is a good one (not American myself): "we hold these truths to be self-evident, that all [people] are […] equal […] with certain unalienable rights".
You can’t get morals from stats, and some core ideals ought to live in your mind rent-free.
I haven't talked to these people on any regular basis, other than once attending one of their weird little seminars at someone's house, so I've had little to no experience with how they apply these ideas. I just read the blog posts and mulled them over on my own for a while, and my main takeaway was something like "change your mind incrementally when you get new evidence".
That said, everything you're saying does ring true, and I've been changing my mind about Yudkowsky and his ilk pretty gradually for a number of years. Hearing that he's in with dudes like SBF has made me ready to fully distance myself from their stuff now that I know what they get up to.
I'd never heard of applying the ad hoc Bayesian thing to moral stances; I'd only ever applied it to questions of fact. Creepy to think where that leads.
Thanks for chatting with me about this, it's been helpful.