[-] lurker@awful.systems 9 points 1 day ago

Wait, if I'm recalling right, a former StopAI activist went on the run after threatening an OpenAI lab and causing it to shut down. It mentions the suspect has threatened an OpenAI lab before, so could it be the same guy??

[-] lurker@awful.systems 9 points 1 day ago

oh cool, a racist wacko. yay…

[-] lurker@awful.systems 7 points 3 days ago

Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it's gonna fuck up cybersecurity. This thread goes over it a bit

[-] lurker@awful.systems 10 points 5 days ago

My CEO, who is a known hype-man, is a massive liar? shock horror

seriously, anyone who listens to Scam Altman these days is an idiot

[-] lurker@awful.systems 28 points 3 weeks ago

One of the solutions proposed — I am not kidding — is “writing scripts to automate repetitive tasks.” It’s really funny imagining a software engineer being like “woah … like automating the boring stuff, you might say?”

If I'm getting this right, they're going to cut the cost of automating everything… by automating more things?

[-] lurker@awful.systems 17 points 1 month ago

I mean, after the Epstein Files you have to be either deliberately ignorant or incredibly dense to not realise the rich get off easily

submitted 1 month ago* (last edited 1 month ago) by lurker@awful.systems to c/techtakes@awful.systems

Originally posted in the Stubsack, but decided to make it its own post because why not

[-] lurker@awful.systems 30 points 1 month ago* (last edited 1 month ago)

Incredibly ballsy move to keep using their tech after you literally branded them a supply chain threat and implied you would take legal action against them, but that’s this administration for ya

(they did say there would be a six-month phase out period after which if Anthropic still didn’t comply, they’d force them to, but still)

[-] lurker@awful.systems 17 points 1 month ago

this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist's research). How many more times does it have to happen before people stop using AI to look after something important???

submitted 1 month ago* (last edited 1 month ago) by lurker@awful.systems to c/sneerclub@awful.systems

this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre savvy character, and him taking what appears to be a shot at the Zizzians

[-] lurker@awful.systems 16 points 2 months ago

Eliezer, I would be very careful about talking about age of consent if I were you

[-] lurker@awful.systems 16 points 2 months ago

bootlicking billionaires when they're the main ones supporting the thing you say is an existential threat is definitely a choice. rationalists seem to be getting more and more mask off in the face of the trump administration

submitted 2 months ago* (last edited 2 months ago) by lurker@awful.systems to c/sneerclub@awful.systems

originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, because it's pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

[-] lurker@awful.systems 18 points 2 months ago

it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end


I searched for “eugenics” on yud’s xcancel (I will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about

