[-] lurker@awful.systems 5 points 19 hours ago

It appears that Anthropic vs the Pentagon is going to happen right on the heels of Altman vs Musk, which is spicy

"While the Musk-OpenAI courtroom showdown has been billed as the first great technology trial of the AI era, a legal showdown that matters far more will take place two weeks from now in a courtroom in Washington, D.C. That’s when a federal appeals court panel will hear arguments in Anthropic’s challenge to the ‘supply chain risk’ designation the Trump Administration slapped on it for refusing to agree to its specified contract terms for providing its AI models to the U.S. military. That’s a case with huge implications not just for Anthropic and the fate of the AI industry, but also for the balance of power between the state and industry more generally."

[-] lurker@awful.systems 6 points 22 hours ago

the funniest bit so far is probably that Greg Brockman's diary (and mind you, he's a massive Trump supporter, being a top donor to him) essentially vindicated Elmo's whole case against OpenAI. You gotta love it when morons shoot themselves in the foot

[-] lurker@awful.systems 7 points 1 day ago* (last edited 1 day ago)

the current state of Altman vs Musk

and this one: https://abc7news.com/live-updates/elon-musk-sam-altman-live-updates-trial-enters-2nd-week-focus-shifting-openai-president-greg-brockman/19036397/entry/19039981/

My take on this case is a resounding "everyone sucks here," but I must say I'm hoping for OpenAI to lose this one, since Scam Slopman deserves to be sacked and the AI bubble deserves a good shake

[-] lurker@awful.systems 4 points 2 days ago* (last edited 2 days ago)

Wow, that’s probably one of the most in-depth critiques of the book I’ve read. Kudos to the OP

[-] lurker@awful.systems 8 points 4 days ago* (last edited 4 days ago)

it gets better/worse: this might be the paid version, meaning Yud gave money to the companies he thinks will kill us all, to use the technology he thinks will kill us all, to win an argument on shitter.

[-] lurker@awful.systems 10 points 4 days ago

this gem over on the other SneerClub (context is in the comments)

funny how so many anti-AGI people are willing to use AI image generators to make memes about how they’re right

[-] lurker@awful.systems 4 points 5 days ago

ah yes, random lines that go up into infinity, can't have an AI video without 'em. Bonus points because that's like his fifth video about an AI takeover scenario, all of which have similar thumbnails

[-] lurker@awful.systems 6 points 5 days ago

got jumpscared by this while scrolling

[-] lurker@awful.systems 9 points 6 days ago* (last edited 6 days ago)

All roads lead to ads

[-] lurker@awful.systems 12 points 6 days ago

The Elon Musk vs OpenAI lawsuit is going ahead; I personally hope both parties lose every dime and get laughed at long after they die

submitted 1 month ago* (last edited 1 month ago) by lurker@awful.systems to c/techtakes@awful.systems

Originally posted in the Stubsack, but decided to make it its own post because why not

submitted 2 months ago* (last edited 2 months ago) by lurker@awful.systems to c/sneerclub@awful.systems

this was already posted on the Reddit SneerClub, but I decided to crosspost it here so you guys wouldn't miss out on Yudkowsky calling himself a genre-savvy character, and him taking what appears to be a shot at the Zizzians

submitted 3 months ago* (last edited 3 months ago) by lurker@awful.systems to c/sneerclub@awful.systems

originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, because it's pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.


I searched for "eugenics" on Yud's xcancel (I will never use Twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. Yud, genuinely, what are you talking about?
