We don't need leaps and bounds from here. We're already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.
And this is with LLMs - which are stupid. We didn't design them with logic units or factoid databases. Anything they get right is an emergent property of guessing plausible words, and they get a shocking number of things right. Smaller models and faster training will encourage experimentation toward better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that'll fake its way through explaining why the answer is yes or no. If we're only interested in the accuracy of that answer, then we're wasting effort on the quality of the faking.
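To make the yes/no/mu idea concrete, here's a minimal sketch of constrained decoding: score only those three answers and ignore the rest of the vocabulary. It assumes the Hugging Face transformers library, with GPT-2 purely as a stand-in model - everything else is illustrative.

```python
# A minimal sketch of a yes/no/mu model via constrained decoding: only
# three answers are ever scored, so the model physically cannot ramble.
# Assumes the Hugging Face transformers library; GPT-2 is a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# First token id of each allowed answer (with GPT-2's leading-space BPE).
ANSWERS = [" yes", " no", " mu"]
answer_ids = [tok.encode(a)[0] for a in ANSWERS]

prompt = "Question: Does a dog have Buddha-nature? Answer:"
with torch.no_grad():
    logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]

# Pick whichever of the three allowed tokens the model scores highest.
best = max(range(len(ANSWERS)), key=lambda i: logits[answer_ids[i]].item())
print(ANSWERS[best].strip())
```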
Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: they help. They narrow the gap between "but right now it sucks at [blank]" and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
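Both tricks reduce to prompt plumbing. A rough sketch of the pattern - `call_model` is a hypothetical placeholder for whatever LLM API you actually use:

```python
# Sketch of "think out loud, then check your work" as two model calls.
# `call_model` is a hypothetical placeholder so the sketch runs as-is;
# swap in a real LLM API call.

def call_model(prompt: str) -> str:
    return f"(model output for: {prompt[:40]}...)"

def answer_with_checks(question: str) -> str:
    # Pass 1: ask the model to reason step by step before answering.
    draft = call_model(
        f"{question}\n\nThink step by step, then give a final answer."
    )
    # Pass 2: feed the draft back and ask the model to verify it.
    return call_model(
        f"Question: {question}\nProposed answer: {draft}\n\n"
        "Check the reasoning for mistakes. Correct it if wrong; "
        "otherwise repeat the final answer."
    )

print(answer_with_checks("What is 17 * 24?"))
```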
I’m not saying they don’t have applications. But the idea of them being a one-size-fits-all solution to everything is something being sold to VCs and shareholders.
As you say - the issue is accuracy. And, as you also say - that’s not what these things do; instead they predict what comes next and present it confidently. Hallucinations aren’t errors, they’re what these models were built to do.
If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
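The 100% version is boring on purpose: a fixed table of accepted inputs mapped to actions, and a loud failure on anything else. A toy sketch (the commands are made up):

```python
# Deterministic command handling: known input -> known action, every time.
# An unrecognized command fails loudly instead of guessing plausibly.

COMMANDS = {
    "set alarm 7am": lambda: print("alarm set for 07:00"),
    "weather": lambda: print("fetching weather..."),
}

def handle(command: str) -> None:
    action = COMMANDS.get(command.strip().lower())
    if action is None:
        print(f"unrecognized command: {command!r}")
    else:
        action()

handle("Set alarm 7am")    # -> alarm set for 07:00
handle("wake me at 7ish")  # -> unrecognized command
```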
Maybe somewhere along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.
If you want something more complex than an alarm clock, this does kinda work for anything. Emphasis on "kinda."
Neural networks are universal approximators. People get hung up on the approximation part, like that cancels out the potential in... universal. You can make a model that does any damn thing. Only recently has that seriously meant both the "you" and the "can" - backpropagation works, and it works on video-game hardware.
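To make "universal approximator" concrete, here's a minimal sketch: a small MLP trained by plain backpropagation to fit sin(x). It assumes PyTorch; the target function, layer sizes, and step count are arbitrary choices, and it runs on a consumer GPU if one is available.

```python
# A tiny MLP approximating sin(x) via backprop - nothing special about
# sin; any reasonable function works, which is the whole point.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.linspace(-3.14, 3.14, 256, device=device).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
).to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()  # backpropagation does all the heavy lifting
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # approximate, not exact
```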
"AI is whatever hasn't been done yet" has been the punchline for decades. For any advancement in the field, people only notice once you tell them it's related to AI, and then they just call it "AI," and later complain that it's not like on Star Trek.
And yet it moves. Each advancement makes new things possible, and old things better. Being right most of the time is good, actually. 100% would be better than 99%, but the 100% version does not exist, so 99% is better than never.
Telling the grifters where to shove it should not condemn the cool shit they're lying about.