lemming investors be like:
Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.
"Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they'd potentially threaten cyberattacks to deter its creation.
@ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵
Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage.

This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.

Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI's problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.

Competitiveness: China may invade Taiwan this decade. Taiwan produces the West's cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.

Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one's interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.

"Doomers" think catastrophe is a foregone conclusion. "Ostriches" bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, "risk-conscious" actions affect whether we will have bad or good outcomes."
Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size, lest a big LLM spawn superintelligence (thresholds we have since blown well past, and somehow we are not paperclip soup yet). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-style scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it's not like you can say, "oh, they are about to run superintelligence.exe, sabotage the training run"). MAD "works" because satellites make it obvious when the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense.

Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we've seen. The code to make these models is a few hundred lines! There is no moat! Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky's personal AI safety employee, so I assume this will soon be the official policy of the US.
link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m
was just in a chat room with an anthropic employee and she said, "if you have a solution for x, we are hiring," and before I could even say, "why would I want to work for a cult?" she literally started saying, "some people underestimate the super exponential of progress."
To which I replied, "the only super exponential I'm seeing rn is Anthropic's negative revenue." She didn't block me, so she's a good sport, but yeah, they are all kool-aid drinkers for sure.
fuck man, this was bad enough that people outside the sneerverse were talking about this around me irl
Made the fatal mistake of posting a sneer on my main, only to have my friend let me know they had been assigned the same dorm room as Dan. Same friend was later roommates with my wife's best friend (and former cohabitant). Small world!
Spotted in the Wild:
smh they really do be out here believing there's a little man in the machine with goals and desires, common L for these folks
The American electorate has just covered itself with gasoline because eggs cost 2 dollars more. Come January they strike the match. gg. HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE NOVEMBER 5TH. My only consolation is that I'll hopefully get to watch some of the MAGAs/non-voters/vote-your-conscience peeps suffer before the end. But ol' Musky and Peter Thiel will be in their gilded bunkers while the fires consume us all.
Was salivating all weekend waiting for this to drop, from Subbarao Kambhampati's group:
Ladies and gentlemen, we have achieved block stacking abilities. It is a straight shot from here to cold fusion! ... unfortunately, there is a minor caveat:
Looks like performance drops like a rock as the number of required steps increases...
I literally just saw a xitter post about how the exploding pagers in Lebanon are actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to simultaneously kill all of humanity.
Which, again: both entities are humans, they have the same intelligence, you twats. Same argument people make all the time w.r.t. the Spanish vs. the Aztecs, where gunpowder somehow made Cortez and company gigabrains compared to the lowly indigenous people (and totally ignoring the contributions of the real superintelligent entity: the smallpox virus).
Unbelievably gross 🤢 I can't even begin to imagine what kind of lunatic would treat their loved one's worth as 'just a number' or commodity to be exchanged. Frightening to think these are the folks trying to influence govt officials.
Unclear to me what Daniel actually did as a 'researcher' besides draw a curve going up on a chalkboard (true story: the one interaction I had with LeCun was showing him Daniel's LW acct, which is just singularity posting, and Yann thought it was big funny). I admit I am guilty of engineer-gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole "we are taking safety very seriously, so we hired LW philosophers" thing, and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fanfic circles.
Ex-OAI "governance" researcher just means they couldn't forecast that they were the marks all along. This is my belief, unless he reveals that he superforecasted altman would coup and sideline him in 1998. Someone please correct me if I'm wrong, and they have evidence that Daniel actually understands how computers work.