[-] Architeuthis 24 points 2 weeks ago* (last edited 2 weeks ago)

To get a bit meta for a minute, you don't really need to.

The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no caveats attached, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.

Until then, it's probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses: you don't really need to debunk every separate witness testimony; it's self-evident that a world where there is an afterlife that also freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.

[-] Architeuthis 24 points 2 months ago* (last edited 2 months ago)

That's the second model announcement in a row by the major LLM vendor where the supposed advantage over the current state of the art is presented as... better vibes. He actually doesn't even call the output good, just successfully metafictional.

Meanwhile over at Anthropic, Dario just declared that we're about 12 months away from all computer code being AI generated, with 90% of it by the summer.

This is not a serious industry.

[-] Architeuthis 26 points 5 months ago* (last edited 5 months ago)

Rationalist debatelord org Rootclaim, which in early 2024 lost a $100K bet by failing to defend the covid lab leak theory against a random ACX commenter, will now debate millionaire covid vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, with the loser giving up $1M.

One would assume this to be a slam dunk, but then again one would assume the people who founded an entire organization about establishing ground truths via rationalist debate would actually be good at rationally debating.

[-] Architeuthis 30 points 5 months ago

It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don't mind putting your name on low-quality derivative slop in the first place.

[-] Architeuthis 27 points 8 months ago

Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.

To be clear, this means that if you treat someone like shit all their life, saying you're sorry to their Sufficiently Similar Simulation™ like a hundred years after they are dead makes it ok.

This must be one of the most blatantly supernatural rationalist Accepted Truths, that if your simulation is of sufficiently high fidelity you will share some ontology of self with it, which by the way is how the basilisk can torture you even if you've been dead for centuries.

[-] Architeuthis 31 points 11 months ago

I'm not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.

I'm almost certain I've seen EY catch shit on twitter (from actual ML researchers, no less) for insinuating something very similar.

[-] Architeuthis 27 points 11 months ago* (last edited 11 months ago)

It's a sad fate that sometimes befalls engineers who are good at talking to audiences and who work for a company big enough to afford having that be their primary role.

edit: I love that he's Chief Evangelist though, like he has a bunch of little Google Cloud clerics running around doing chores for him.

[-] Architeuthis 26 points 11 months ago* (last edited 11 months ago)

Honestly, the evident plethora of poor programming practices is the least notable thing about all this; using roided autocomplete to cut corners was never going to be a well-calculated decision, it's always the cherry on top of a shit-cake.

[-] Architeuthis 32 points 1 year ago* (last edited 1 year ago)

There's an actual explanation in the original article about some of the wardrobe choices. It's even dumber, and it involves effective altruism.

It is a very cold home. It’s early March, and within 20 minutes of being here the tips of some of my fingers have turned white. This, they explain, is part of living their values: as effective altruists, they give everything they can spare to charity (their charities). “Any pointless indulgence, like heating the house in the winter, we try to avoid if we can find other solutions,” says Malcolm. This explains Simone’s clothing: her normal winterwear is cheap, high-quality snowsuits she buys online from Russia, but she can’t fit into them now, so she’s currently dressing in the clothes pregnant women wore in a time before central heating: a drawstring-necked chemise on top of warm underlayers, a thick black apron, and a modified corset she found on Etsy. She assures me she is not a tradwife. “I’m not dressing trad now because we’re into trad, because before I was dressing like a Russian Bond villain. We do what’s practical.”

[-] Architeuthis 33 points 1 year ago

This was such a chore to read; it's basically quirk-washing TREACLES. This is like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.

The doomer scene may or may not be a delusional bubble—we’ll find out in a few years

Fuck off.

The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”

The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions while fortifying a bubble of accepted truths that are strangely amenable to allowing rich people to do whatever the hell they want.

Writing a 7,000-to-8,000-word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.

[-] Architeuthis 27 points 1 year ago

birdsite stuff:

A rationalist organization offered a James Randi-style $100k prize to anyone who could defeat them in a structured longform debate and prove COVID had a natural origin, so a rando Slate Star Codex commenter took them up on it and absolutely destroyed them. You won't believe what happened next (they wrote a pissy blogpost claiming the handpicked judges had "errors in ... probabilistic inference" for not agreeing with their conclusion, and grew even more confident in their incorrect opinion).

[-] Architeuthis 24 points 1 year ago

Had to google shit-test; apparently it's a PUA term. What a surprise.

148
submitted 2 years ago by Architeuthis to c/[email protected]

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

3
submitted 2 years ago* (last edited 2 years ago) by Architeuthis to c/[email protected]

original is here, but you aren't missing any context; that's the twit.

I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy wrote about him, which was recently released.
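For what it's worth, the "damning Bayesian priors" boil down to a single division. Here's a minimal sketch of the base-rate arithmetic the twit is gesturing at, using its own rough figures (the uniform-draw assumption and the variable names are mine, not his):

```python
# Base-rate arithmetic behind the twit's "Bayesian priors" argument.
# Figures are the twit's own rough numbers, not real demographic data.

literate_circa_1600 = 10_000_000      # "probably as low as ten million people"
literate_western_now = 1_000_000_000  # "upwards of a billion literate people"

# If you (dubiously) model "the greatest writer" as a uniform draw over
# literate people, the prior odds of the winner coming from Shakespeare's
# cohort rather than the present-day one are just the population ratio:
odds = literate_circa_1600 / literate_western_now
print(f"prior odds: {odds:.2f}, i.e. about 1 in {1/odds:.0f}")  # ~1 in 100
```

Note that the uniform prior is doing literally all the work here; there's no likelihood term for, say, having actually read the plays.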


Architeuthis
