[-] lurker@awful.systems 1 points 7 minutes ago

Pete winning here is gonna set a really bad precedent, especially given how destructive the fuck-ups will be. So I'm pretty concerned about this

[-] lurker@awful.systems 1 points 16 minutes ago* (last edited 14 minutes ago)

Just read this piece by Gary Marcus on this topic. Jesus fucking Christ this administration sucks so fucking bad. As a matter of fact 'sucks bad' is an understatement for the sheer amount of damage this administration is causing

[-] lurker@awful.systems 3 points 20 hours ago

If I had to list every single worldwide problem right now, Trump would be connected to at least 80% of that list

[-] lurker@awful.systems 5 points 23 hours ago* (last edited 22 hours ago)

the US Government slides further into technofascism by throwing chatbots into the military: https://garymarcus.substack.com/p/code-red-for-humanity

you don't hate this administration enough

[-] lurker@awful.systems 9 points 1 day ago

"but right now it doesn't feel like LW is a place where I can collaboratively make intellectual progress on this very important topic."

Maybe that’s because people don’t feel comfortable when you say you should teach an AI race/IQ pseudoscience????

[-] lurker@awful.systems 12 points 2 days ago* (last edited 2 days ago)

Literally everything I've heard about OpenClaw involves it fucking up and being a security risk. A sensible person would realise this means it's unreliable and shouldn't be left to take care of important tasks (or really any tasks at all), but AI boosters are typically not sensible people. And the guy who made it got a job at OpenAI, so this is far from the end...

[-] lurker@awful.systems 14 points 3 days ago

this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does this have to happen before people stop using AI to look after anything important???

[-] lurker@awful.systems 5 points 3 days ago* (last edited 3 days ago)

sharing this channel’s posts is the equivalent of shooting fish in a barrel, but http://youtube.com/post/UgkxoSpDpLNEr9WawVXnl5Mlw4NeQ6-XsLjl this really just feels like an excuse to repost that METR graph. also wtf is the graph on top

[-] lurker@awful.systems 12 points 3 days ago

it’s always the Elon Musk fans, isn’t it.

and on the topic of Futurism articles on Elon Musk: https://futurism.com/future-society/court-trouble-jury-hates-elon-musk

one word: LMFAOOOO

[-] lurker@awful.systems 7 points 3 days ago* (last edited 3 days ago)

oh yeah I 100% agree that their methodology is flawed, and that blog does a pretty good job of outlining the issues. I just thought the absolutely huge gap was both interesting and funny. The huge error bars aren't a good sign either; between those and the gap, it really feels like someone screwed up

12
submitted 2 weeks ago* (last edited 2 weeks ago) by lurker@awful.systems to c/sneerclub@awful.systems

this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre-savvy character and taking what appears to be a shot at the Zizians

26
submitted 3 weeks ago* (last edited 3 weeks ago) by lurker@awful.systems to c/sneerclub@awful.systems

originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, because it is pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

34

I searched for “eugenics” on yud’s xcancel (i will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about

