scruiser

joined 2 years ago
[–] [email protected] 8 points 1 day ago

With a name like that and lesswrong to springboard its popularity, BayesCoin should be good for at least one cycle of pump and dump/rug-pull.

Do some actual programming work (or at least write a "white paper") on tying it into a prediction market on the blockchain and you've got rationalist catnip; they should be all over it, and you could do a few cycles of pumping and dumping before the final rug pull.

[–] [email protected] 10 points 1 day ago (6 children)

I feel like some of the doomers are already setting things up to pivot when their most recent major prophecy (AI 2027) fails:

From here:

(My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

It starts with some rationalist jargon to say the author agrees, just with everything happening one year later...

AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control the end of 2027, but the median narrative is probably around 2030 or 2031.

Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't have any mention of Trump blowing up the timeline doing insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...

No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs will get better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers...

...but not by much.

Nope, they still have insane expectations.

Most of my disagreements are quibbles

Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions have come true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason Scott didn't call any of that out in AI 2027.

[–] [email protected] 5 points 1 day ago

Doom feels really likely to me. […] But who knows, perhaps one of my assumptions is wrong. Perhaps there’s some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.

This line actually really annoys me, because they are already set up to move the end date of their doomsday prediction as needed while still maintaining their overall doomerism.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago)

Oh lol, yeah I forgot he originally used lesswrong as a pen name for HPMOR (he immediately claimed credit once it actually got popular).

So the problem is that lesswrong and Eliezer were previously obscure enough that few academic or educated sources bothered debunking them, but still prolific enough to get lots of casual readers. Sneerclub makes fun of their shit as it comes up, but effort posting is tiresome, so our effort posts are scattered among more casual mockery. There is one big essay connecting the dots, written by serious academics (Timnit Gebru and Émile Torres): https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599 . They point out the people common to lesswrong, effective altruists, transhumanists, extropians, etc., and explain how the ideologies are related and how they originated.

Also a related irony: Timnit Gebru is interested in and has written serious academic papers about algorithmic bias and AI ethics. But for whatever reason (Because she's an actual academic? Because she wrote a paper accurately calling them out? Because of the racists among them who are actually in favor of algorithmic bias?) the "AI safety" lesswrong people hate her and are absolutely not interested in working with the AI ethics field of academia. In a world where they were saner and less independent-minded cranks, lesswrong and MIRI could have tried to get into the field of AI ethics and used that to sanewash themselves and build reputation/respectability (and maybe even tested their ideas in a field with immediately demonstrable applications instead of wildly speculating about AI systems that aren't remotely close to existing). Instead, they only sort of obliquely imply AI safety is an extension of AI ethics whenever their ideas are discussed in mainstream news sources, but don't really maintain the facade if actually pressed on it (I'm not sure how much of that is mainstream reporters trying to sanewash them versus deliberate deception on their part).

For a serious but much gentler rebuttal of Effective Altruism, there is this blog: https://reflectivealtruism.com/ . Note this blog was written by an Effective Altruist trying to persuade other EAs of the problem, so they often extend too much credit to EA and lesswrong in an effort to get their points across.

...and I realized you may not have context on the EAs... they are a movement spun off of academic thinking about how to do charity most effectively, and lesswrong was a major early contributor of both thinking and members to their movement (they also currently get members from more mainstream recruiting, so it occasionally causes clashes when more mainstream people look around and notice the AI doom-hype and the pseudoscientific racism). So like half of EA's work is how to do charity effectively: mosquito nets to countries with malaria problems, or paying for nutrition supplements for malnourished children, or paying for anti-parasitic drugs to stop... and half their work is funding stuff like "AI safety" research or eugenics think tanks. Oh, and the EA's utilitarian "earn to give" concept was a major inspiration for Sam Bankman-Fried trying to make a bunch of money through FTX, so that's another dot connected! (And SBF got a reputation boost from his association with them, and in general there is the issue of billionaire philanthropists laundering their reputations and buying influence through philanthropy, so add that to the pile of problems with EA.)

Edit: I realized you were actually asking for books about real rationality, not resources deconstructing rationalists... so "Thinking, Fast and Slow" is the book on cognitive biases that Eliezer cribs from. Douglas Hofstadter has a lot of interesting books on philosophical thinking in computer science terms: "Gödel, Escher, Bach" and "I Am a Strange Loop". In some ways GEB is dated, but I think that adds context that makes it better (in that you can immediately see how the book is flawed, so you don't think computer science can replace all other fields). The institute Timnit Gebru is a part of looks like a good source for academic writing on real AI harms: https://www.dair-institute.org/ (but I haven't actually read most of her work yet, just the TESCREAL essay, and skimmed a few of her other writings).

[–] [email protected] 15 points 3 days ago (1 children)

No, he's in favor of human slavery, so he still wants to keep naming schemes evocative of it.

[–] [email protected] 7 points 4 days ago* (last edited 4 days ago) (4 children)

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that doesn't actually have academic publishing behind it, so jargon.

Instrumental convergence.... I think Bostrom coined that one?

The AI alignment forum has a claimed origin here... is anyone on the article here from CFAR?

[–] [email protected] 8 points 4 days ago* (last edited 4 days ago)

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: still host, present tense. I had misremembered some news about some other rationality-adjacent institution as them shutting down; nope, they are still going strong, offering regular 4-day ~~brainwashing sessions~~ workshops.

[–] [email protected] 7 points 4 days ago

It's the sort of stuff that makes great material for science fiction! It's less fun when you see it in the NYT or quoted by mainstream politicians with plans that will wreck the country.

[–] [email protected] 12 points 4 days ago

Yeah, the genocidal imagery was downright unhinged, much worse than I expected from what little I've previously read of his. I almost wonder how ideologically adjacent allies like Siskind can still stand to be associated with him (but not really, Siskind can normalize any odious insanity if it serves his purposes).

[–] [email protected] 13 points 4 days ago

His fears are my hope: that Trump fucking up hard enough will send the pendulum of public opinion the other way (and then the Democrats use that to push some actually leftist policies through... it's a hope, not an actual prediction).

He cultivated this incompetence and worshiped at the altar of the Silicon Valley CEO, so seeing him confronted with Elon's and Trump's clumsy incompetence is some nice schadenfreude.

[–] [email protected] 13 points 4 days ago* (last edited 4 days ago) (1 children)

I can use bad analogies also!

  • If airplanes can fly, why can't they fly to the moon? It is a straightforward extension of existing flight technology, and plotting airplane max altitude from 1900-1920 shows exponential improvement in max altitude. People who are denying moon-plane potential just aren't looking at the hard quantitative numbers in the industry. In fact, with no atmosphere in the way, past a certain threshold airplanes should be able to get higher and higher and faster and faster without anything to slow them down.

I think Eliezer might have started the bad airplane analogies... let me see if I can find a link... and I found an analogy from the same author as the 2027 ~~fanfic~~ forecast: https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity

Eliezer used a tortured metaphor about rockets, so I still blame him for the tortured airplane metaphor: https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

[–] [email protected] 4 points 4 days ago

So... on strategies for explaining to normies: a personal story often grabs people more than dry facts, so you could focus on the narrative of Eliezer trying a big idea, failing or giving up, and moving on to bigger ideas before repeating (stock bot to seed AI to AI programming language to AI safety to shutting down all AI)? You'll need the Wayback Machine, but it is a simple narrative with a clear pattern?

Or you could focus on the narrative arc of someone who previously bought into lesswrong? I don't volunteer, but maybe someone else would be willing to take that kind of attention?

I took a stab at both approaches here: https://awful.systems/comment/6885617

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and the open discourse of ideas from banning racists, etc.).
