[-] [email protected] 19 points 1 week ago

Penny Arcade chimes in on corporate AI mandates:

[-] [email protected] 20 points 2 months ago

The coda is top tier sneer:

Maybe it’s useful to know that Altman uses a knife that’s showy but incohesive and wrong for the job; he wastes huge amounts of money on olive oil that he uses recklessly; and he has an automated coffee machine that claims to save labour while doing the exact opposite because it can’t be trusted. His kitchen is a catalogue of inefficiency, incomprehension, and waste. If that’s any indication of how he runs the company, insolvency cannot be considered too unrealistic a threat.

[-] [email protected] 20 points 3 months ago* (last edited 3 months ago)

Today in relevant skeets:

::: spoiler transcript
Skeet: If you can clock who this is meant to be instantly you are on the computer the perfect amount. You’re doing fine don’t even worry about it.

Quoted skeet: 'Why are high fertility people always so weird?' A weekend with the pronatalists

Image: Egghead Jr. and Miss Prissy from Looney Tunes Foghorn Leghorn shorts.
:::

[-] [email protected] 18 points 5 months ago* (last edited 5 months ago)

Saltman has a new blogpost out, which he calls 'Three Observations', that I feel too tired to sneer at properly, but I'm sure it will be featured in pivot-to-ai pretty soon.

Of note is that he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the "observation" that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on.
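For flavor, here's a minimal sketch of what that "observation" cashes out to if you take i = log(r) at face value (toy numbers of my own, nothing from the actual blogpost): every doubling of resources buys the same fixed bump in "intelligence".

```python
import math

# Toy illustration of the claimed scaling relation i = log(r).
# If "intelligence" grows with the log of resources, each doubling of
# resources adds the same constant increment (log 2), i.e. diminishing
# returns. The resource values are arbitrary units, not real figures.
for resources in [1, 2, 4, 8, 16, 32]:
    intelligence = math.log(resources)
    print(f"resources={resources:>2}  intelligence={intelligence:.2f}")

# Each step up adds log(2) ≈ 0.69, no matter how much has already been spent.
```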

Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

Third observation is that

The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

which is hilarious.

The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.

[-] [email protected] 20 points 7 months ago* (last edited 7 months ago)

I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

[-] [email protected] 20 points 8 months ago

No shot is over two seconds, because AI video can’t keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks’ wheels don’t actually move. You’ll see more wrong with the ad the more you look.

Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and also that it's the most generic and uninspired shit imaginable.

[-] [email protected] 18 points 10 months ago* (last edited 10 months ago)

Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph.

She seems to think the AI generates .WAD files.

I guess they fell victim to one of the classic blunders: assuming that it can't possibly be that stupid, and that someone must be explaining it wrong.

[-] [email protected] 18 points 1 year ago

IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.

If the article was about me I'd be making Colin Robinson feeding noises all the way through.

edit: Obligatory only 1 hour 43 minutes of reading to go then

[-] [email protected] 19 points 1 year ago

It hasn't worked 'well' for computers since like the Pentium, what are you talking about?

The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume we're still at the low-hanging-fruit stage of R&D and that it'll stabilize as the technology matures, instead of proudly proclaiming that surely it'll approach infinity and break reality.

There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community blurb, which you should check out.

So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

[-] [email protected] 19 points 1 year ago

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

[-] [email protected] 20 points 1 year ago* (last edited 1 year ago)

So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.

At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, is going to be its own thing under a Windows PC branding.

edit: Yud must love that, instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants, we seem to be trending towards having special deep-learning-facilitating hardware (or whatever NPUs actually are) integrated into every new device, starting with iPhones and so-called Windows PCs.

edit edit: the branding appears to be "Copilot+ PCs", not Windows PCs.

[-] [email protected] 19 points 1 year ago* (last edited 1 year ago)

Sticking numbers next to things and calling it a day is basically the whole idea behind Bayesian rationalism.
