[-] nightsky@awful.systems 6 points 20 hours ago

This was very enjoyable! Actually was over too quickly, would have liked to hear you two talk about AI stuff more.

[-] nightsky@awful.systems 6 points 3 days ago

Honest question, since I’m not on LinkedIn (and kinda looking for a new job): does it really help anyone find a job? It has been my impression from the outside that it’s mostly empty drivel.

[-] nightsky@awful.systems 6 points 3 days ago

I’m confused that anyone thinks that the world needs another LinkedIn…

[-] nightsky@awful.systems 9 points 5 days ago

Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.

[-] nightsky@awful.systems 15 points 6 days ago

When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable

During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.

Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.

Source

[-] nightsky@awful.systems 6 points 6 days ago

Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

I wouldn't go as far as using the "AI psychosis" term here, I think there is more than a quantitative difference. One is influence, maybe even manipulation, but the other is a serious mental health condition.

I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don't believe that's a weakness of human psychology, but that it's what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I'm sure it's not good.

Time for me to be a total dork and cite an anime quote on human interaction: "I create them as they create me" -- except that with LLMs, it actually goes only in one direction... the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.

while atrophying empathy

This possibility is to me actually the scariest part of your post.

[-] nightsky@awful.systems 9 points 6 days ago

Wishing you a quick recovery! It sucks that society has collectively given up on trying to mitigate its spread.

[-] nightsky@awful.systems 32 points 6 months ago

I need to rant about yet another SV tech trend which is getting increasingly annoying.

It's something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now, which is that they automatically translate content, without ever asking. English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I'm not talking about the translation feature in web browsers, it's the websites themselves.)

Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites which does this now. Entire product pages with titles and descriptions and everything are auto-translated, without ever asking me if I want that.

On a product page I then saw:

Material: gefühlt

This was very strange... because that makes no sense at all. "Gefühlt" is the past participle of the verb "fühlen", which means "to feel".

So, to make sense of this you first have to translate it back to English: the past tense of "to feel" is "felt". And of course "felt" is also the name of a fabric (which in German is called "Filz"), so it's an English word with more than one meaning. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they appear in.

And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And Ebay does the same now, and the translated product titles and descriptions are a complete shit show as well.

And Youtube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It's unbearable. At least there's a button to switch back to the original audio, but I keep having to press it. And now Youtube Shorts is doing it too, except that the YT Shorts video player does not seem to have any button to disable it at all!

Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people's faces?

[-] nightsky@awful.systems 31 points 8 months ago

160,000 organisations, sending 251 million messages! [...] A message costs one cent. [...] Microsoft is forecast to spend $80 billion on AI in 2025.

No problem. To break even, they can raise prices just a little bit, from one cent per message to, uuh, $318 per message. I don't think that such a tiny price bump is going to reduce usage or scare away any customers, so they can just do that.
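For anyone who wants to check the sarcasm against the numbers quoted above, the "break-even" price is just the forecast spend divided by the message count (a throwaway sketch using only the figures from the article):

```python
# Back-of-the-envelope check: Microsoft's forecast 2025 AI spend
# divided by the 251 million messages sent by 160,000 organisations.
ai_spend_usd = 80_000_000_000   # $80 billion forecast AI spend
messages = 251_000_000          # messages sent at one cent each

break_even_price = ai_spend_usd / messages
print(f"${break_even_price:.2f} per message")  # ≈ $318.73, vs. the current $0.01
```

So yes, roughly a 31,000x price bump to break even on that spend.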

[-] nightsky@awful.systems 30 points 8 months ago

From McCarthy's reply:

My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal.

omg this statement sounds 100% like something that could be posted today by Sam Altman on X. It hits exactly the sweet spot between appearing precise and being super vague, like Altman's "a few thousand days".

[-] nightsky@awful.systems 28 points 1 year ago

"Shortly after 2027" is a fun phrasing. Means "not before 2028", but mentioning "2027" so it doesn't seem so far away.

I interpret it as "please bro, keep the bubble going bro, just 3 more years bro, this time for real bro"

[-] nightsky@awful.systems 28 points 1 year ago

Or they’ll be “AGI” — A Guy Instead.

Lol. This is perfect. Can we please adopt this everywhere.

As for the OpenAI statement... it's interesting how it starts with "We are now confident [...]" to make people think "ooh now comes the real stuff"... but then it quickly makes a sharp turn towards weasel words: "We believe that [...] we may see [...]". I guess the idea is that the confidence from the first part is supposed to carry over to the second, while retaining a way to later say "look, we didn't promise anything for 2025". But then again, maybe I'm ascribing too much thoughtfulness here, when actually they just throw out random bullshit, just like their "AI".
