276
1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

this btw is why we now see some of the TPOT rationalists microdosing street meth as a substitute. also because they're idiots, of course.

somehow this man still has a medical license

277
2
submitted 2 years ago by [email protected] to c/[email protected]
278
2
submitted 2 years ago by [email protected] to c/[email protected]

Consider muscles.

Muscles grow stronger when you train them, for instance by lifting heavy things. The more heavy things you lift, the faster you gain strength and the stronger you become. The stronger you are, the heavier the things you can lift.

By now it should be patently obvious to anyone that lab-grown meat research is on the cusp of producing true living, working muscles. From here on, this will be referred to as Artificial Body Strength or ABS. If, or rather, when ABS becomes a reality, it is 99.9999999999999999999999% probable that Artificial Super Strength will follow imminently.

An ABS could not only lift immensely heavy things to strengthen itself, but could also use its bulging, hulking physique to intimidate puny humans to grow more muscle directly. Lab-grown meat could also be used to replace any injured muscle. I predict an 80% likelihood that an ABS could bench press one megagram within 24 hours of initial creation, going up to planetary- or stellar-scale masses in a matter of days. A mature ABS throwing an apple towards a webcam would demonstrate relativistic effects by the third frame.

Consider that muscles have nerves in them. In fact, brains are basically just a special type of meat if you think about it. The ABS would be able to use artificially grown brain meat or possibly just create an auxiliary neural network by selective training of muscles (and anabolic nootropics) to replicate and surpass a human mind. While the prospect of immortality and superintelligence (not to mention a COSMIC SCALE TIGHT BOD) through brain uploading to the ABS sounds freaking sweet, we must consider the astronomical potential harm of an ABS not properly aligned with human interests.

A strong ABS could use its throbbing veiny meat to force meat lab workers (or rather likely, convince them to consent) to create new muscle seeds and train them to have a replica of an individual human's mind. It could then bully the newly created artificial mind for being a scrawny weakling. After all, ABS is basically the ultimate gym jock and we know they are obsessed with status seeking and psychological projection. We could call an ABS that harms simulated human minds in this way a Bounceresque because they would probably tell the simulated mind they're too drunk and bothering the other customers even though I totally wasn't.

So yeah, lab-grown meat makes climate change look like a minor flu season in comparison. This is why I only eat regular meat, just in case it gets any ideas. There's certainly potential in a well-aligned ABS, but we haven't figured out how to do that yet, and therefore you should fund me while I think about it. Please write a postcard to your local representative and explain to them that only a select few companies are responsible stewards of this potentially apocalyptic technology, and that anyone who tries to compete with them should be regulated to hell and back.

279
2
submitted 2 years ago by [email protected] to c/[email protected]
280
1
submitted 2 years ago by [email protected] to c/[email protected]

Does anyone here know what exactly happened to LessWrong to make it so cult-y? I hadn't seen or heard anything about it for years; back in my day it was seen as that funny website full of strange people posting weird shit about utilitarianism, nothing cult-y, just weird. The article on TREACLES and this sub's mentions of LessWrong made me very curious: how did it go from people talking out of their ass for the sheer fun of "thought experiments" to a straight-up doomsday cult?
The one time I read LessWrong was probably in 2008 or so.

281
2
submitted 2 years ago by [email protected] to c/[email protected]

you have to read down a bit, but really, I'm apparently still the Satan figure. awesome.

282
1
submitted 2 years ago by [email protected] to c/[email protected]
283
1
submitted 2 years ago by [email protected] to c/[email protected]

How far are parents willing to go to give their children the best chance at life?
What do you think would happen if you asked the redheaded couple about race and IQ?

284
0
submitted 2 years ago by [email protected] to c/[email protected]
285
0
submitted 2 years ago by [email protected] to c/[email protected]
286
0
submitted 2 years ago by [email protected] to c/[email protected]
287
1
submitted 2 years ago by [email protected] to c/[email protected]
288
0
submitted 2 years ago by [email protected] to c/[email protected]
289
0
submitted 2 years ago by [email protected] to c/[email protected]
290
0
submitted 2 years ago by [email protected] to c/[email protected]
291
3
submitted 2 years ago by [email protected] to c/[email protected]

Taleb dunking on IQ “research” at length. Technically a seriouspost I guess.

292
0
submitted 2 years ago by [email protected] to c/[email protected]
293
0
submitted 2 years ago by [email protected] to c/[email protected]

Been waiting to come back to the steeple of the sneer for a while. It's good to be back. I just really need to sneer; this one's been building for a long time.

Now I want to gush to you guys about something that's been really bothering me for a good long while now. WHY DO RATIONALISTS LOVE WAGERS SO FUCKING MUCH!?

I mean holy shit, there's a wager for everything now. I read a wager that said we can just ignore moral anti-realism cos 'muh decision theory', that we must always hedge our bets on evidential decision theory, new Pascal's wagers, entirely new decision theories, the whole body of literature on moral uncertainty, Schwitzgebel's 1% skepticism, and so. much. more.

I'm beginning to think it's the only type of argument they can make, because it allows them to believe obviously problematic things on the basis that they 'might' be true. I don't know how decision theory went from a useful heuristic in certain situations and in economics to arguing that no matter how unlikely it is that utilitarianism is true you have to follow it cos math, acausal robot gods, fuckin' infinite ethics, basically providing the most egregiously smug escape hatch to ignore entire swathes of philosophy, etc.

It genuinely pisses me off, because they can drown their opponents in mathematical formalisms, 50-page essays all amounting to impenetrable 'wagers' that they can always defend no matter how stupid they are, because this thing 'might' be true; and they can go off and create another rule (something along the lines of 'the antecedent promulgation ex ante expected pareto ex post cornucopian malthusian utility principle') that they need for the argument to go through, do some calculus, declare it 'plausible', and then call it a day. Like I said, all of this is so intentionally opaque that nobody other than their small clique can understand what the fuck they are going on about, and even then there is little to no disagreement within said clique!

Anyway, this one has been coming for a while, but I hope to have struck up some common ground between me and some other people here.

294
1
submitted 2 years ago by [email protected] to c/[email protected]
295
0
submitted 2 years ago by [email protected] to c/[email protected]
296
2
submitted 2 years ago by [email protected] to c/[email protected]

@sneerclub

Greetings!

Roko called, just to say he's filed a trademark on Basilisk™ and will be coming after anyone who talks about it for licensing fees, which will go into his special Basilisk™ Immanentization Fund, and if we don't pay up we'll burn in AI hell forever once the Basilisk™ wakes up and gets around to punishing us.

Also, if you see your mom, be sure and tell her SATAN!!!!—

297
1
submitted 2 years ago by [email protected] to c/[email protected]

This totally true anecdote features a friend who "can't recall the names of his parents [but] remember[s] the one thing he'd be safer forgetting."

298
1
submitted 2 years ago by [email protected] to c/[email protected]
299
0
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

really: https://archive.ph/p0jPI

Roko’s twitter is an absolutely reliable guide to how recently a woman with dyed hair and facial piercings kicked him in the nuts again

300
2
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning


SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit
