[-] [email protected] 10 points 5 days ago

Legal anime streaming services herald the end times.

Yes, if it's not a fansub which randomly retains some Japanese words while overlaying half the picture with text to explain why it can't be translated due to finely nuanced meanings, then what are you even watching, the ASI (anime superintelligence) disapproves.

[-] [email protected] 9 points 5 days ago

But wait, the best(*) anime came out in the 90s and early 00s. What does that mean? Is the simulation running out of juice?

(*) details of my anime evaluation function are confidential and proprietary

[-] [email protected] 18 points 6 days ago

Also, happy Pride :3

Yes, happy pride month everyone!

I've decided that this year I'm going to be more open about this and wear a pride bracelet whenever I go in public this month. Including for (remote) work meetings where nobody knows... wonder if anyone will notice.

[-] [email protected] 31 points 1 week ago

160,000 organisations, sending 251 million messages! [...] A message costs one cent. [...] Microsoft is forecast to spend $80 billion on AI in 2025.

No problem. To break even, they can raise prices just a little bit, from one cent per message to, uuh, $318 per message. I don't think that such a tiny price bump is going to reduce usage or scare away any customers, so they can just do that.
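For the curious, the $318 figure is just division, under the deliberately silly premise that the entire forecast AI spend had to be recouped from those messages alone. A quick sketch using the numbers from the quote:

```python
# Back-of-the-envelope check of the break-even price, assuming (absurdly)
# that Microsoft's forecast $80 billion AI spend for 2025 must be recouped
# entirely from the 251 million messages mentioned in the article.
ai_spend = 80_000_000_000   # forecast 2025 AI spend, USD
messages = 251_000_000      # messages sent by the 160,000 organisations

break_even_price = ai_spend / messages
print(f"${break_even_price:.2f} per message")
```

Which comes out to roughly $318.73 per message, versus the one cent actually charged.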

[-] [email protected] 27 points 1 week ago

From McCarthy's reply:

My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal.

omg this statement sounds 100% like something that could be posted today by Sam Altman on X. It hits exactly the sweet spot between appearing precise and being super vague, like Altman's "a few thousand days".

[-] [email protected] 23 points 2 weeks ago

If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.

The article title and that part remind me of this quote from Charles Babbage in 1864:

On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

It feels as if Babbage had already interacted with today's AI pushers.

[-] [email protected] 23 points 1 month ago

I hate this position so much, claiming that it's because "the left" wanted "too much". That's not only morally bankrupt, it's factually wrong too. And also ignorant of historical examples. It's lazy and rotten thinking all the way through.

[-] [email protected] 28 points 4 months ago

"Shortly after 2027" is a fun phrasing. Means "not before 2028", but mentioning "2027" so it doesn't seem so far away.

I interpret it as "please bro, keep the bubble going bro, just 3 more years bro, this time for real bro"

[-] [email protected] 25 points 4 months ago

So much wrong with this...

In a way, it reminds me of the wave of entirely fixed/premade loop-based music making tools from years ago. Where you just drag and drop a number of pre-made loops from a library onto some tracks, and then the software automatically makes them fit together musically and that's it, no further skill or effort required. I always found that fun to play around with for an evening or two, but then it quickly got boring. Because the more you optimize away the creative process, the less interesting it becomes.

Now the AI bros have made it even more streamlined, which means it's even more boring. Great. Also, they appear to think that they are the first people to ever have the idea "let's make music making simple". Not surprising they believe that, because a fundamental tech bro belief is that history is never interesting and can never teach anything, so they never even look at it.

[-] [email protected] 28 points 5 months ago

Or they’ll be “AGI” — A Guy Instead.

Lol. This is perfect. Can we please adopt this everywhere.

As for the OpenAI statement... it's interesting how it starts with "We are now confident [...]" to make people think "ooh now comes the real stuff"... but then it quickly makes a sharp turn towards weasel words: "We believe that [...] we may see [...]" . I guess the idea is that the confidence from the first part is supposed to carry over to the second, while retaining a way to later say "look, we didn't promise anything for 2025". But then again, maybe I'm ascribing too much thoughtfulness here, when actually they just throw out random bullshit, just like their "AI".

[-] [email protected] 22 points 5 months ago

With your choice of words you are anthropomorphizing LLMs. No valid reasoning can occur when starting from a false point of origin.

Or to put it differently: to me this is about as ridiculous as arguing that bubble sort may somehow "gain new abilities" and do "horrifying things".

[-] [email protected] 21 points 8 months ago

I wonder if this signals being at peak hype soon. I mean, how much more outlandish can they get without destroying the hype bubble's foundation, i.e. the suspension of disbelief that all this would somehow become possible in the near future. We're on the level of "arrival of an alien intelligence" now, how much further can they escalate that rhetoric without popping the bubble?


nightsky

joined 9 months ago