[-] [email protected] 1 points 2 months ago

We should be trying to stop this from coming to pass with the urgency we would try to stop a killer asteroid from striking Earth. Why aren’t we?

Wait, what are we trying to stop from coming to pass? Superintelligent AIs? Either I'm missing his point, or he really agrees with the doomers that LLMs are on their way to becoming "superintelligent".

[-] [email protected] 2 points 3 months ago

As a fellow Usenet junkie from way back, now I'm curious which newsgroups Yarvin hung out in.

[-] [email protected] 3 points 6 months ago

One thing to keep in mind about Ptacek is that he will die on the stupidest of hills. Back when Y Combinator president Garry Tan tweeted that members of the San Francisco board of supervisors should be killed, Ptacek defended him to the extent that the mouth-breathers on HN even turned on him.

[-] [email protected] 1 points 2 years ago

I like the way he thinks the lack of punctuation in his "joke" is the tell that it's a joke.

He's also apparently never heard the aphorism that if you have to explain the joke, it's probably not that funny.

[-] [email protected] 1 points 2 years ago

Is it wrong to hope they manage to realize one of these libertarian paradise fantasies? I'd really love to see how quickly it devolves into a Mad Max Thunderdome situation.

[-] [email protected] 1 points 2 years ago

Stephen Jay Gould's The Mismeasure of Man is always a good place to start.

[-] [email protected] 2 points 2 years ago* (last edited 2 years ago)

This is good:

Take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: We can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Bogossian]. Not only clueless, but obedient enough to want to think in a certain way.
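The quote's point (any finite sequence is compatible with infinitely many rules) is easy to make concrete. A toy sketch, not from the quote itself: the family p(n) = n + c·(n−1)(n−2)(n−3)(n−4) agrees with 1, 2, 3, 4 at n = 1..4 for every value of c, yet predicts anything you like at n = 5.

```python
def p(n, c=1):
    # Matches the sequence 1,2,3,4 at n=1..4 for ANY c,
    # because the product term vanishes there.
    return n + c * (n - 1) * (n - 2) * (n - 3) * (n - 4)

print([p(n) for n in range(1, 5)])  # [1, 2, 3, 4]
print(p(5))                         # 29, not 5
print(p(5, c=100))                  # 2405; by varying c, any value is reachable
```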

Also this:

If, as psychologists show, MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lower the visible variance. Probability and statistics confuse fools.

And:

If someone came up w/a numerical "Well Being Quotient" WBQ or "Sleep Quotient", SQ, trying to mimic temperature or a physical quantity, you'd find it absurd. But put enough academics w/physics envy and race hatred on it and it will become an official measure.

[-] [email protected] 1 points 2 years ago* (last edited 2 years ago)

Now that his alter ego has been exposed, Hanania is falling back on the "stupid things I said in my youth" chestnut. Here's a good response to that.

[-] [email protected] 1 points 2 years ago* (last edited 2 years ago)

In theory, a prediction market can work. The idea is that even though there are a lot of uninformed people making bets, their bad predictions tend to cancel each other out, while the subgroup of experts within that crowd converges on a good prediction. The problem is that prediction markets only work under ideal conditions. As soon as the bettor pool is skewed by a biased subpopulation, they stop working. And that's exactly what happens with the rationalist crowd. The main benefit rationalists get from prediction markets and wagers is an unfounded confidence that their ideas have merit. Prediction markets also have a long history in libertarian circles, which probably helps explain why rationalists are so keen on them.
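The cancellation-versus-skew point can be shown with a toy simulation (my own illustration, not anything from the comment: the market price is crudely modeled as the average of bettors' probability estimates). Unbiased noise averages out; a biased subpopulation drags the price away from the true probability no matter how large the crowd is.

```python
import random

random.seed(0)

def market_estimate(bettors):
    # Crude model: the market price is the mean of reported probabilities.
    return sum(bettors) / len(bettors)

true_p = 0.30

def clip(x):
    return min(1.0, max(0.0, x))

# Ideal crowd: noisy but roughly unbiased; errors mostly cancel.
unbiased = [clip(true_p + random.gauss(0, 0.2)) for _ in range(10_000)]

# Skewed crowd: 30% of bettors systematically overestimate (around 0.8).
biased = unbiased[:7_000] + [clip(0.8 + random.gauss(0, 0.1)) for _ in range(3_000)]

print(round(market_estimate(unbiased), 2))  # stays near 0.30
print(round(market_estimate(biased), 2))    # pulled well above 0.30
```

More bettors sharpen the unbiased estimate but do nothing to fix the skewed one; the bias term survives averaging.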

[-] [email protected] 1 points 2 years ago

"TempleOS on the blockchain"

Ok, that's some quality sneer. A bit obscure and esoteric, but perfect for anyone who knows anything about TempleOS.

