[-] scruiser@awful.systems 1 points 8 hours ago

Someone just made a top post condemning the Molotov but defending and normalizing Eliezer: https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence

[-] scruiser@awful.systems 7 points 8 hours ago* (last edited 8 hours ago)

A rationalist made a top post where they (poorly) argue against political "violence" (scare quotes because they lump in property damage): https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence

Highlights include a shallow half-assed defense of dear leader Eliezer's calls for violence:

True, Eliezer Yudkowsky’s TIME article called on the state to use violence to enforce AI policies required to prevent AI from destroying humanity. But it’s hard to think of a more legitimate use of violence than the government preventing the deaths of everyone alive.

Eliezer called for drone strikes against data centers even if it would start a nuclear war, and even against countries that aren't signatories to whatever hypothetical international agreement against AI there is. That is extremely irregular by the standards of international law and diplomacy, and this lesswronger just elides those little details.

Violence is not a realistic way to stop AI.

(Except for drone strikes and starting a nuclear war.)

They treat a Molotov thrown at Sam Altman's house as if it were thrown directly at Sam himself:

as critics blamed the AI Safety community for the attacker who threw a Molotov cocktail at Sam Altman

This is a pretty blatant misrepresentation of the action, one that makes it sound much more violent than it was.

They continue on with minimizing right-wing violence:

Even if there are occasional acts of political violence like the murders of Democratic Minnesota legislators or Conservative pundit Charlie Kirk, we don’t generally view them as indicting entire movements, but as the acts of deranged individuals.

Actually, outside of right-wing bubbles (and right-wing sources masquerading as centrist), lots of people do blame Trump and the leaders of the broader right-wing movement for a lot of recent political violence. Of course, this is lesswrong, which has a pretty cooked Overton window, so it figures the lesswronger would be wrong about this.

Following that, the lesswronger acknowledges it is kind of questionable, and a conflation of terms, to label property damage as violence, but then presses right on ahead with some pretty weak arguments that don't acknowledge why some people want to make the distinction.

So in conclusion:

  • drone strikes that start nuclear wars: legitimate violence that is totally logical and reasonable
  • throwing a single incendiary at someone's home that doesn't hurt anybody or even set the home on fire: illegitimate violence that must be absolutely condemned without exception
  • (bonus) recent right-wing violence: lone deranged individuals and not the fault of Trump or anyone like that. Everyone is saying it.
[-] scruiser@awful.systems 4 points 23 hours ago* (last edited 22 hours ago)

Lesswrong is too centrist-brained to ever even hint at legitimizing (non-state-sanctioned) destruction of property as a means of protest or political action. But according to the orthodox lesswrong lore, Sam Altman's actions are literally an existential threat to all humanity, so they can't defend him either. So they are left with silence.

I actually kind of agree with the anarcho-libertarian's response? It is massively downvoted.

This is just elevating your aesthetic preference for what the violence you're advocating for looks like to a moral principle. The claim that throwing a Molotov cocktail at one guy's house is counterproductive to the goal of "bombing the datacenters" is a better argument, though one I do not believe.

Bingo. Dear leader Yudkowsky can ask to bomb the data centers, and as long as that action goes through the US political process, the violence is legitimate, regardless of how ill-behaved the US is or how far its political processes have degraded from actually functioning as a democracy.

[-] scruiser@awful.systems 8 points 1 day ago* (last edited 1 day ago)

garner sympathy

He posted a response that criticizes the recent news article about him (of all the times he has acted like a sociopathic liar), and HN was eating it up, so even if this isn't an intentional false flag, he is still playing it that way pretty effectively.

[-] scruiser@awful.systems 4 points 1 day ago

The collapse of the current American management of global supply chains isn't exactly an optimistic expectation, but I guess it beats social media continuing as it is into the future, and maybe a better global order will develop in the aftermath.

[-] scruiser@awful.systems 7 points 1 day ago

No77e is correctly noting the discrepancy between the rationalist obsession with eugenics and the belief in an imminent (or even within the next 40 years) technological singularity, but fails to realize that the general problem is the rationalists' eugenics obsession itself. It is kind of frustrating how close and yet how far they are from realizing the problem.

Also, a reminder of the time Eliezer claimed Genesmith's insane genetic engineering plan was one of the most important projects in the world (after AI, obviously): https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=fxnhSv3n4aRjPQDwQ Apparently Eliezer's plan, if we aren't all doomed by LLMs first, is to let the genetically engineered geniuses invent friendly AI instead.

[-] scruiser@awful.systems 3 points 1 day ago* (last edited 1 day ago)

It's a good blog series.

But just to point it out... note that the author still buys into the AI hype too much. This post criticizes Microsoft for missing out because OpenAI made that $300 billion deal with Oracle instead (with the assumption that Microsoft could have gotten a similar amount of revenue from OpenAI). Except neither OpenAI nor Oracle has the money or the means to carry out that deal. Oracle is struggling to raise the capital to fulfill its end, and analyses of how long it takes to bring data centers online suggest it can't meet its targets even with the money. OpenAI, meanwhile, doesn't have the money to pay for its end; the revenue just isn't coming in unless it somehow becomes more ubiquitous and lucrative than, for example, the entire market for all streaming services put together (thanks to Ed Zitron for that fun comparison).

[-] scruiser@awful.systems 4 points 1 day ago* (last edited 1 day ago)

I had hoped that with the whole “agent” push that we would start seeing more sane usage, like having AI be a fuzzy logic step in a chain of formal logic and existing deterministic tools

I think this is the best you can expect out of LLMs, and the relatively more successful "agentic" AI efforts are probably doing exactly this, but their relative success is serving as hype fuel for the more impossible promises of LLMs. Also, if you have formal logic and deterministic tools wrapping and sanity-checking the LLM bits... I don't think the value-add is there in evaporating rivers and firing up jet turbines to train and serve "cutting edge" models that only screw up 1% of the time, because you can run an open-weight model 1/100th the size that screws up 10% of the time instead. (Note one important detail: training costs go up roughly quadratically with model size, so a 100x larger model means something like 10,000x the training compute.) I think the frontier LLM companies should have pivoted to prioritizing smaller sizes, greater efficiency, and actually sustainable business practices 4 years ago. At the very latest, 2 years ago, with the release of GPT-4o, OpenAI should have realized that pushing up model size was the wrong direction (just as they should have realized that training Chain-of-Thought was not going to be a magic bullet).
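(Back-of-the-envelope on that quadratic claim, under my own assumptions of the standard ~6·N·D FLOPs rule of thumb for training cost and compute-optimal data scaling, where the token count D grows in proportion to the parameter count N:)

```latex
C_{\text{train}} \approx 6\,N\,D, \qquad D \propto N
\;\Rightarrow\; C_{\text{train}} \propto N^{2}
\;\Rightarrow\; \frac{C_{\text{train}}(100N)}{C_{\text{train}}(N)} = 100^{2} = 10{,}000
```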

And to be clear, I still think this is really generous to the use case of smaller LMs.

[-] scruiser@awful.systems 3 points 1 day ago

On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate.

I suspect this is the real limit. Claude Mythos might find real vulnerabilities, but if they are buried among loads of false positives it won't be that useful to black-hat or white-hat hackers, and the endless tide of slop PRs and bug reports will keep coming.

I tried looking through Anthropic's "preview" for a description of the false positive rate... they sort of beat around the bush about how many false positives they had to sort through to find the real vulnerabilities they reported (even obliquely addressing the issue was better than I expected, but still well short of a good industry-standard security report, from what I understand).

They've got one class of bugs they can apparently verify efficiently?

Memory safety violations are particularly easy to verify. Tools like Address Sanitizer perfectly separate real bugs from hallucinations; as a result, when we tested Opus 4.6 and sent Firefox 112 bugs, every single one was confirmed to be a true positive.

It's not clear from their preview whether Claude was able to use Address Sanitizer automatically or not. Also not clear to me (I've programmed in Python for the past ten years and haven't touched C since my undergraduate days), so maybe someone could explain: how likely is it that these bugs are actually exploitable and/or actually show up for users?
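For what it's worth (and this is just my own toy sketch, not anything from Anthropic's post), the verification step itself is mechanical: you build the target with Address Sanitizer instrumentation, run the reproducer, and either you get a concrete memory-error report or you don't, which is presumably why it filters out hallucinated bugs so cleanly. Something like:

```c
/* Toy heap overflow, purely illustrative.
 * Build with ASan instrumentation, e.g.: clang -g -fsanitize=address overflow.c -o overflow
 * Running ./overflow then aborts with a "heap-buffer-overflow" report pointing at the bad write,
 * so a reported bug of this class either reproduces concretely or it doesn't. */
#include <stdlib.h>

int main(void) {
    char *buf = malloc(8); /* 8-byte heap buffer */
    buf[8] = 'x';          /* one-past-the-end write: ASan flags this at runtime */
    free(buf);
    return 0;
}
```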

Moving on...

This process means that we don’t flood maintainers with an unmanageable amount of new work—but the length of this process also means that fewer than 1% of the potential vulnerabilities we’ve discovered so far have been fully patched by their maintainers.

So it's good they aren't just flooding maintainers with slop (and it means that if they do publicly release mythos, maintainers will get flooded with slop bug fixes), but... this makes me expect they have a really high false positive rate (especially if you count minor code issues that don't actually cause bugs or vulnerabilities as false positives).

[-] scruiser@awful.systems 8 points 1 day ago

I've read speculation that in 30-50 years people will have an attitude towards social media that we have towards cigarettes now.

That would be really nice, but that scenario feels pretty optimistic to me on a few points. For one, scientists doing research were able to overcome the lobbying influence and paid think tanks of the cigarette companies; I am worried science as a public institution isn't in good enough shape to do that nowadays. Likewise, part of the pushback against cigarettes included mandatory labeling and sin taxes, and it would take some pretty major shifts for the political will for that kind of action to be viable. Well, maybe these things are viable in the EU; the US is pretty screwed.

[-] scruiser@awful.systems 6 points 1 day ago

Old Twitter was terrible for people’s souls.

It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice; they have alternatives.

Fun fact! Over the past few years, Eliezer has deliberately cut back his lesswrong posting in favor of posting on twitter, apparently (he's made a few comments about this choice) because lesswrong doesn't uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and takes your controversial foundational premises seriously?)

[-] scruiser@awful.systems 12 points 1 day ago

Rationalist Infighting!

tldr; one of the MIRI-aligned rationalists (Rob Bensinger) complained about how EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communications! Various lesswrongers weigh in, seemingly blind to the irony and hypocrisy!

Some highlights from the quoted tweets and the lesswrongers' comments on them:

  • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors; he really doesn't have room to complain about rationalists creating crit-hype.

  • Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic's leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)

  • Rob Bensinger is indirectly trying to claim Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open Phil/EA/Anthropic, which have been "strategic" with their public communication to the point of dishonesty.

  • habryka is apparently on the verge of crashing out? I can't tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

  • Loads of tediously long posts, mired in that long-winded rationalist way of talking and full of rationalist in-group jargon for conversation and conflict resolution

  • Disagreement on whether Ilya Sutskever's $50 billion startup is going to contribute to AI safety or just continue the race to AGI.

  • Arguments over who is with the EAs vs. Open Philanthropy vs. MIRI!

  • Argument over the definition of gaslighting!

To be clear, I agree with the complaints about EA and Anthropic; I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.

I did sympathize with one lesswronger's comment:

More than any other group I've been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I'm just too stupid to understand the high level strategic nuances of what's going on -- what are these people even arguing about? The exact flavor of comms presented over the last ten years?

22

So, seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or because they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before Deep Learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note this is a major discrepancy for someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit; it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8 GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPUs max!) with no intermediate steps, goals, or options might not be the best.

20
submitted 11 months ago* (last edited 11 months ago) by scruiser@awful.systems to c/sneerclub@awful.systems

I found a neat essay discussing the history of Doug Lenat, Eurisko, and Cyc here. The essay is pretty cool: Doug Lenat made one of the largest and most systematic efforts to make Good Old-Fashioned Symbolic AI reach AGI through sheer volume and detail of expert-system entries. It didn't work (obviously), but what's interesting (especially in contrast to LLMs) is that Doug made his business, Cycorp, actually profitable, actually producing useful products in the form of custom-built expert systems for various customers over the decades, with a steady level of employees and effort spent (as opposed to LLM companies sucking up massive VC capital to generate crappy products that will probably go bust).

This sparked memories of lesswrong discussion of Eurisko... which leads to some choice sneerable classic lines.

In a sequences classic, Eliezer discusses Eurisko. Having now read an essay that explains Eurisko more clearly, I find a lot of Eliezer's discussion seems much emptier.

To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.

This line is classic Eliezer Dunning-Kruger arrogance. The lessons from Cyc were used in useful expert systems, and the effort spent building those expert systems was used to continue advancing Cyc, so I would actually call Doug really successful, much more successful than many AGI efforts (including Eliezer's). And it didn't depend on endless VC funding or hype cycles.

EURISKO used "heuristics" to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. E.g. EURISKO started with the heuristic "investigate extreme cases" but moved on to "investigate cases close to extremes". The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.

...

EURISKO lacked what I called "insight" - that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for nought. Unless, y'know, you're counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.

Eliezer simultaneously mocks Doug's big achievements while exaggerating this one. The detailed essay I linked at the beginning actually explains this properly: Traveller's rules inadvertently encouraged a narrow, degenerate (in the mathematical sense) strategy. The second-place player actually found the same broken strategy Doug (using Eurisko) did; Doug just did it slightly better because he had gamed it out more and included a few ship designs that countered an opponent running the same broken strategy. It was a nice feat of a human leveraging a computer to mathematically explore a game; it wasn't an AI independently exploring a game.

Another lesswronger brings up Eurisko here. Eliezer is of course worried:

This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.

And yes, Eliezer actually is worried a 1970s dead end in AI might lead to FOOM and AGI doom. To a comment here:

Are you really afraid that AI is so easy that it's a very short distance between "ooh, cool" and "oh, shit"?

Eliezer responds:

Depends how cool. I don't know the space of self-modifying programs very well. Anything cooler than anything that's been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.

Fearmongering back in 2008 even before he had given up and gone full doomer.

And this reminds me: Eliezer did not actually predict which paths would lead to better AI. In 2008 he was pretty convinced neural networks were not a path to AGI.

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now. I don't think this particular raw fact licenses any conclusions in particular. But at least don't tell me it's still the new revolutionary idea in AI.

Apparently it took all the way until AlphaGo (sometime between 2015 and 2017) for Eliezer to start to realize he was wrong. (He never made a major post about changing his mind; I had to reconstruct this process and estimate the date from other lesswrongers discussing it and from noticing small comments from him here and there.) Of course, even as late as 2017, MIRI was still neglecting neural networks to focus on abstract frameworks like "Highly Reliable Agent Design".

So yeah. Puts things into context, doesn't it?

Bonus: one of Doug's last papers, which lists a lot of lessons LLMs could take from Cyc and expert systems. You might recognize the co-author, Gary Marcus, from one of the LLM-critical blogs: https://garymarcus.substack.com/

19
submitted 11 months ago* (last edited 11 months ago) by scruiser@awful.systems to c/sneerclub@awful.systems

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there is a large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them because what they want would result in some cyberpunk dystopian shit in the process of trying to reach it). But even the accelerationists don't want Chinese AGI, because insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI.

This is a long-running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year, up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news: China actually has no chance of competing at AI (this was posted before DeepSeek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place.

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the threat to freedom and the authoritarian power (pay no attention to the techbro techno-fascists).

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on having an AGI race with China, not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI, but because the race will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or that both lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI? Judging by the upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

22

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up on my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of getting at "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, deflects it and blames the "left" elitists. (I put left in quote marks because the author apparently thinks establishment Democrats are actually leftists; I fucking wish.)

An illustrative quote (one of Scott's, which the author agrees with):

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the establishment Democrats deserve a major piece of the blame, in that their status quo neoliberalism has been rejected by the public yet the Democratic establishment refuses to consider genuinely leftist ideas. But that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point; if anything, the opposite of one.

In case my angry, disjointed summary leaves you in any doubt that the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference, the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tldr; the author is trying to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

68

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post doesn't even count Scott Alexander as "racist" (although it does at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).

0

This is a classic sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new catchphrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally some actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
