[-] [email protected] 11 points 13 hours ago

Ok, maybe someone can help me here figure something out.

I've wondered for a long time about a strange adjacency which I sometimes observe between what I call (for lack of a better term) "unix conservatism" and fascism. It's the strange phenomenon where ideas about "classic" and "pure" unix systems coincide with the worst politics. For example the "suckless" stuff. Or the ramblings of people like ESR. Criticism of systemd is sometimes infused with it (yes, there is plenty of valid criticism as well. But there's this other kind of criticism I've often seen, which is icky and weirdly personal). And I've also seen traces of this in discussions of programming languages newer than C, especially when topics like memory safety come up.

This is distinguished from retro computing and nostalgia and such, those are unrelated. If someone e.g. just likes old unix stuff, that's not what I mean.

As you may already notice, I struggle a bit to come up with a clear definition, and to decide whether there really is a connection or just a loose collection of examples that don't form a definable set. So, is there really something there, or am I seeing a connection that doesn't exist?

I've also so far not figured out what might create the connection. Ideas I have come up with are: appeal to times that are gone (going back to an idealized computing past that never existed), elitism (computers must not become user friendly), ideas of purity (an imaginary pure "unix philosophy").

Anyway, now with this new xlibre project, there's another one that fits into it...

[-] [email protected] 11 points 2 days ago
  • You will understand how to use AI tools for real-time employee engagement analysis
  • You will create personalized employee development plans using AI-driven analytics
  • You will learn to enhance employee well-being programs with AI-driven insights and recommendations

You will learn to create the torment nexus

  • You will prepare your career for your future work in a world with robots and AI

You will learn to live in the torment nexus

  • You will gain expertise in ethical considerations when implementing AI in HR practices

I assume it's a single slide that says "LOL who cares"

[-] [email protected] 11 points 2 days ago

Maybe someone has put it into their heads that they have to "go with the times", because AI is "inevitable" and "here to stay". And if they don't adapt, AI would obsolete them. That Wikipedia would become irrelevant because their leadership was hostile to "progress" and rejected "emerging technology", just like Wikipedia obsoleted most of the old print encyclopedia vendors. And one day they would be blamed for it, because they were stuck in the past at a crucial moment. But if they adopt AI now, they might imagine, one day they will be praised as the visionaries who carried Wikipedia over to the next golden age of technology.

Of course all of that is complete bullshit. But instilling those fears ("use it now, or you will be left behind!") is a big part of the AI marketing messaging which is blasted everywhere non-stop. So I wouldn't be surprised if those are the brainworms in their heads.

[-] [email protected] 31 points 1 week ago

160,000 organisations, sending 251 million messages! [...] A message costs one cent. [...] Microsoft is forecast to spend $80 billion on AI in 2025.

No problem. To break even, they can raise prices just a little bit, from one cent per message to, uuh, $318 per message. I don't think that such a tiny price bump is going to reduce usage or scare away any customers, so they can just do that.
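(A quick sanity check of that $318 figure, using only the numbers quoted above — the $80 billion spend and 251 million messages are from the article, the rest is just division:)

```python
# Back-of-the-envelope check of the "break-even" price per message,
# using the figures quoted above: $80 billion AI spend, 251 million messages.
ai_spend = 80e9          # Microsoft's forecast 2025 AI spend, USD
messages = 251e6         # messages sent
current_price = 0.01     # one cent per message

break_even_price = ai_spend / messages
print(f"break-even price per message: ${break_even_price:.2f}")
print(f"required price increase: {break_even_price / current_price:,.0f}x")
```

That works out to roughly $318.73 per message, i.e. a price increase of about 32,000x.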

[-] [email protected] 28 points 1 week ago

From McCarthy's reply:

My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal.

omg this statement sounds 100% like something that could be posted today by Sam Altman on X. It hits exactly the sweet spot between appearing precise and being super vague, like Altman's "a few thousand days".

[-] [email protected] 23 points 2 weeks ago

If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.

The article title and that part remind me of this quote from Charles Babbage in 1864:

On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

It feels as if Babbage had already interacted with today's AI pushers.

[-] [email protected] 23 points 1 month ago

I hate this position so much, claiming that it's because "the left" wanted "too much". That's not only morally bankrupt, it's factually wrong too. And also ignorant of historical examples. It's lazy and rotten thinking all the way through.

[-] [email protected] 28 points 4 months ago

"Shortly after 2027" is a fun phrasing. Means "not before 2028", but mentioning "2027" so it doesn't seem so far away.

I interpret it as "please bro, keep the bubble going bro, just 3 more years bro, this time for real bro"

[-] [email protected] 25 points 4 months ago

So much wrong with this...

In a way, it reminds me of the wave of entirely fixed/premade loop-based music making tools from years ago. Where you just drag and drop a number of pre-made loops from a library onto some tracks, and then the software automatically makes them fit together musically and that's it, no further skill or effort required. I always found that fun to play around with for an evening or two, but then it quickly got boring. Because the more you optimize away the creative process, the less interesting it becomes.

Now the AI bros have made it even more streamlined, which means it's even more boring. Great. Also, they appear to think that they are the first people to ever have the idea "let's make music making simple". Not surprising they believe that, because a fundamental tech bro belief is that history is never interesting and can never teach anything, so they never even look at it.

[-] [email protected] 28 points 5 months ago

Or they’ll be “AGI” — A Guy Instead.

Lol. This is perfect. Can we please adopt this everywhere?

As for the OpenAI statement... it's interesting how it starts with "We are now confident [...]" to make people think "ooh now comes the real stuff"... but then it quickly makes a sharp turn towards weasel words: "We believe that [...] we may see [...]" . I guess the idea is that the confidence from the first part is supposed to carry over to the second, while retaining a way to later say "look, we didn't promise anything for 2025". But then again, maybe I'm ascribing too much thoughtfulness here, when actually they just throw out random bullshit, just like their "AI".

[-] [email protected] 22 points 5 months ago

With your choice of words you are anthropomorphizing LLMs. No valid reasoning can occur when starting from a false point of origin.

Or to put it differently: to me this is as ridiculous as if you were arguing that bubble sort may somehow "gain new abilities" and do "horrifying things".
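(To make the analogy concrete — bubble sort is a fixed, fully deterministic procedure; its behavior follows entirely from its code, with no room to "gain" anything. A minimal sketch:)

```python
def bubble_sort(items):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs.
    Fully deterministic -- same input, same output, every single run."""
    items = list(items)  # work on a copy
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

# No matter how many times it runs, it only ever sorts.
print(bubble_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```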

[-] [email protected] 21 points 8 months ago

I wonder if this signals being at peak hype soon. I mean, how much more outlandish can they get without destroying the hype bubble's foundation, i.e. the suspension of disbelief that all this would somehow become possible in the near future. We're on the level of "arrival of an alien intelligence" now, how much further can they escalate that rhetoric without popping the bubble?
