Lmao so many people telling on themselves in that thread. “I don’t get it, I regularly poison open source projects with LLM code!”
“Oh man, this brain fog I have sure makes it hard to think. Guess I’ll use my trusty LLM! ChatGPT says lead paint is tastier and better for your brain than COVID? Don’t mind if I do!”
OK, I speed-read that thing earlier today, and am now reading it properly.
The best answer — AI has “jagged intelligence” — lies in between hype and skepticism.
Here's how they describe this term, about 2000 words in:
Researchers have come up with a buzzy term to describe this pattern of reasoning: “jagged intelligence.” [...] Picture it like this. If human intelligence looks like a cloud with softly rounded edges, artificial intelligence is like a spiky cloud with giant peaks and valleys right next to each other. In humans, a lot of problem-solving capabilities are highly correlated with each other, but AI can be great at one thing and ridiculously bad at another thing that (to us) doesn’t seem far apart.
So basically, this term is just pure hype, designed to play up the "intelligence" part of it, to suggest that "AI can be great". The article just boils down to "use AI for the things that we think it's good at, and don't use it for the things we think it's bad at!" As they say on the internet, completely unserious.
The big story is: AI companies now claim that their models are capable of genuine reasoning — the type of thinking you and I do when we want to solve a problem. And the big question is: Is that true?
Demonstrably no.
These models are yielding some very impressive results. They can solve tricky logic puzzles, ace math tests, and write flawless code on the first try.
Fuck right off.
Yet they also fail spectacularly on really easy problems. AI experts are torn over how to interpret this. Skeptics take it as evidence that “reasoning” models aren’t really reasoning at all.
Ah, yes, as we all know, the burden of proof lies on skeptics.
Believers insist that the models genuinely are doing some reasoning, and though it may not currently be as flexible as a human’s reasoning, it’s well on its way to getting there. So, who’s right?
Again, fuck off.
Moving on...
The skeptic’s case vs. the believer’s case
An LW-level analysis shows that the article spends 650 words on the skeptic's case and 889 on the believer's case. BIAS!!!!! /s.
Anyway, here are the skeptics quoted:
- Shannon Vallor, "a philosopher of technology at the University of Edinburgh"
- Melanie Mitchell, "a professor at the Santa Fe Institute"
Great, now the believers:
- Ryan Greenblatt, "chief scientist at Redwood Research"
- Ajeya Cotra, "a senior analyst at Open Philanthropy"
You will never guess which two of these four are regular wrongers.
Note that the article only really has examples of the dumbass nature of LLMs. All the smart things AI reportedly does are anecdotal, i.e. the author just says shit like "AI can solve some really complex problems!" Yet it still has the gall to both-sides this and suggest we've boiled the oceans for something more than a simulated idiot.
It's almost as if the international arms trade creates perverse incentives
Martial law and free speech restrictions are only bad if the bad guys do them; also, we get to decide who the bad guys are
OK, so I read the thread as translated by Google. Some notes:
- The system was set up to also use some image recognition so it could filter out some classic incel-type shit, like:
  - believers (i.e. religious)
  - zodiac sign written in the profile
  - doesn't work (i.e. unemployed)
  - breasts shown in photos
  - photos with flowers
- His entire correspondence with these women was done by ChatGPT, including making dates and promising gifts for those dates. He later gives GPT calendar access to avoid a two-dates-to-the-prom situation.
- Did he continue using GPT to talk to his fiancée? Yes. Did he use GPT to feign responsiveness in his texting? Also yes. When she started talking about going to weddings, did it generate a marriage proposal out of the blue and prompt him as to whether or not the message should be sent? Also yes.
just... fuck.
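For the morbidly curious, the "filtering" described above amounts to something like this purely hypothetical sketch (the field names and criteria labels are mine, the thread shares no actual code, and the image-recognition step is hand-waved into a set of tags):

```python
# Hypothetical reconstruction of the swipe-left filter described above;
# nothing here is from the original thread except the criteria themselves.
from dataclasses import dataclass, field

@dataclass
class Profile:
    bio: str
    photo_tags: set[str] = field(default_factory=set)  # pretend output of an image-recognition model
    employed: bool = True
    religious: bool = False

BIO_RED_FLAGS = ("zodiac", "astrology")       # "zodiac sign written"
PHOTO_RED_FLAGS = {"breasts", "flowers"}      # the image-recognition criteria

def should_skip(p: Profile) -> bool:
    """Return True if the bot should swipe left on this profile."""
    if p.religious or not p.employed:
        return True
    if any(flag in p.bio.lower() for flag in BIO_RED_FLAGS):
        return True
    return bool(p.photo_tags & PHOTO_RED_FLAGS)

print(should_skip(Profile(bio="Love astrology and hiking")))  # True
```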
Bean Dad but instead of a can opener it’s swimming/not drowning
What is great is that it only really starts approaching correct once you tell it to essentially copy-paste from Wikipedia.
Also, if some rando approached me on the street, showed me the Wikipedia article for Dijkstra's, and asked me to help explain it, my first-ass instinct would be to check if there was a Simple English version of the article, and go from there.
Disclaimer: I only skimmed said SE article just now. It might not be a great explanation or even correct, but hey, it already exists and didn't require 1.21 gigawatts to get there.
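Sidebar: if anyone actually wants Dijkstra's in fewer words than either Wikipedia article, here's a minimal Python sketch (the textbook binary-heap version, my own naming, obviously not whatever the chatbot coughed up):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`.
    `graph` is {node: {neighbor: weight, ...}}; weights must be non-negative."""
    dist = {source: 0}
    heap = [(0, source)]  # (best-known distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```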
This journo: "Hmm, a guy who was deep in this community says it's a fucked-up shithole with bad politics. I'm going to ignore that aspect and just uncritically platform them wholesale."
Counterpoint: it’s not not a dating app
Sociological Claim: the extent to which a prominence-weighted sample of the rationalist community has refused to credit the Empirical or Philosophical Claims even when presented with strong arguments and evidence is a reason to distrust the community’s collective sanity.
Zack my guy you are so fucking close. Also just fucking leave.
Does Yud predate for food or sport?