The HAL 9000 series has never made a mistake. Except that one time
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
Finally, after decades of research, we created a computer that can't do math. Alan Turing would be proud.
Come to think of it, being frequently wrong but nevertheless overly confident is key to passing the Turing test.
We have finally created machines that can replicate human stupidity.
To be fair, the Turing test doesn't really tell us much about computers. It's better at measuring the human ability to ascribe personalities to inanimate objects.
Yeah, the Turing test wasn't a great metric. The result depends on who's doing the testing. Some people were probably fooled by ALICE or ELIZA (that doctor one), which were pretty much implemented as long switch blocks that repeated the user's input back at them.
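The trick is embarrassingly simple, too. A minimal sketch of that style of chatbot (the patterns here are made up, but ELIZA's DOCTOR script worked on the same reflect-it-back principle):

```python
import re

# a toy ELIZA-style bot: match a pattern, echo the user's own words back
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # default when nothing matches

print(reply("I feel like machines understand me"))
# Why do you feel like machines understand me?
```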
Kinda like how "why?" is pretty much always a valid response and repeating it is more of a sign of cheekiness than lack of intelligence.
I feel like it's increasingly a test applicable to humans rather than to machines. Are you original enough that you couldn't be replaced by a language model?
I'm not sure I like to think about it.
Now that you mention it, I'd be interested to see whether ChatGPT can actually spew the kind of nonsense I've heard from cryptobros and covid anti-vaxxers. I reckon it's not good enough to be that dumb.
There's probably some (small) guardrails on the major platforms to deter spreading misinformation, but it's really easy to get a chat bot to take whatever position you want.
E.g. "Pretend you are a human on Twitter that supports (thing). Please make tweets about your support of (thing) and respond to our conversation as though my comments are tweet replies."
Or more creatively maybe something like, "I need to practice debating someone who thinks (thing). Please argue with me using the most popular arguments, regardless of correctness."
I haven't tried these, but have a bit of practice working with LLMs and this is where I would start if I wanted to make a bot farm.
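In code it's barely more than a system prompt. A minimal sketch using the OpenAI Python client (the model name and the (thing) placeholder are stand-ins, not a recommendation; any chat API works the same way):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# the persona lives entirely in the system prompt
messages = [
    {"role": "system", "content": "Pretend you are a human on Twitter who supports (thing). "
                                  "Respond to my comments as though they are tweet replies."},
    {"role": "user", "content": "Nobody actually believes in (thing), right?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=messages,
)
print(response.choices[0].message.content)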
I mean, the theory behind an LLM is super cool. It's a bunch of vector math under the hood, transforming input with queries, keys, and values. And IMO vector math is one of the coolest and also most confusing math applications there is. If they're able to use MCP as well, you can delegate actual service calls to them, like querying your database.
But like 99% of CS research, it doesn't always translate into practical use, nor is it a cookie-cutter solution for everything. Unfortunately, the business people seem to think otherwise.
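For anyone curious what "queries, keys and values" actually means, here's the core of it stripped down to NumPy. This is just the textbook scaled dot-product attention formula, softmax(QK^T / sqrt(d)) V, run on toy random data:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores) @ V

# toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (3, 4): one transformed vector per token
```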
I mean, does it even use a blockchain? smh my head...
the other wonderful irony?
(basically) the only thing a computer can do is math.
so it's doing a SHITLOAD of math to do a terrible job at doing some very basic math.
bravo!
Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane amount of calculations. So many calculations that it requires hundreds of processing units to crunch them at a reasonable speed. All that to get simple math questions wrong.
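You can see the shape of it even in the tiniest possible example: one "neuron" fit by gradient descent is already a loop of derivatives and dot products, and real models repeat this across billions of parameters. A sketch on made-up data:

```python
import numpy as np

# one linear neuron trained by gradient descent to learn y = 2x + 1
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # gradients of mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 3), round(b, 3))  # converges to roughly 2.0 and 1.0
```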
All that to hallucinate every response in ways that make people feel like they know what they are talking about.
Which they don't, and LLMs never will - unless they hard-code some responses, which then goes against the entire thing.
They should just use lookup tables
I feel called out by this.
they've finally invented a machine that can't run DOOM
It can, in fact, pretend to run Doom.
AI: YOU PICKED UP A SHOTGUN. WHAT DO YOU DO NOW?
Me: I left-click and fire it at the enemy.
AI: bleep bloop YOU ARE LIKELY AN IDIOT. NOW YOU ARE DEAD.
I bet we could save a few gajillion watts if we just put that through an if statement. If math, then just be a calculator. Else, make some bullshit up.
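That if statement is genuinely writable, at least for the arithmetic part. A sketch: parse the prompt as arithmetic with Python's ast module, and only fall back to the language model when it isn't math (ask_llm here is a hypothetical stand-in for the bullshit generator):

```python
import ast
import operator as op

# supported arithmetic operators
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def evaluate(node):
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.UnaryOp):
        return OPS[type(node.op)](evaluate(node.operand))
    raise ValueError("not arithmetic")

def ask_llm(prompt):
    return "some confident-sounding bullshit"  # hypothetical chatbot stand-in

def answer(prompt):
    try:
        # if math, then just be a calculator
        return str(evaluate(ast.parse(prompt, mode="eval").body))
    except (ValueError, SyntaxError, KeyError, ZeroDivisionError):
        # else, make some bullshit up
        return ask_llm(prompt)

print(answer("2 + 2 * 10"))    # 22
print(answer("how are you?"))  # some confident-sounding bullshit
```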
Apparently that's the new way to do math in AI. The AI works out you're trying to do math, tries to write some Python code to do the math, runs the Python code, gets the answer, and writes a response around the numeric answer.
I can't think of any possible issues with this; it's infallible. /s
AI does not handle anything as words. It handles them as tokens. https://blogs.nvidia.com/blog/ai-tokens-explained/
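You can poke at this yourself. A quick sketch with the tiktoken library (cl100k_base is the encoding used by GPT-4-era models; any of the bundled encodings shows the same idea):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("AI handles tokens, not words.")
print(tokens)  # a list of integer token IDs

# each ID maps to a chunk of bytes, often a sub-word piece
print([enc.decode_single_token_bytes(t) for t in tokens])
```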
Layman's terms:
If you want to ask a question to an LLM, you need to go down to an arcade and exchange your quarters for tokens. Then you can feed those tokens into your computer every time you want to ask a question.
AI
The pure embodiment of "Fake it til you make it"
"Fake it over and over even if you never make it."
ChatGPT managed to see through my shenanigans:
If John has six apples and Bob had six oranges, and Bob gave John 2 apples, how many oranges does Betty have if it's a summer day in January?
Classic trick question! Let’s break it down:
John has 6 apples.
Bob has 6 oranges.
Bob gives John 2 apples — but wait, Bob didn’t have any apples, only oranges.
So Bob can’t give John 2 apples.
Meanwhile, Betty hasn’t even been mentioned until the last sentence, and there’s no info about her oranges.
Also, “a summer day in January” only makes sense in the southern hemisphere.
Conclusion: We still have no idea how many oranges Betty has — the question is nonsense on purpose.
So the answer is either: “Indeterminate”, “Nonsense question”, or “Depends on how much Betty likes oranges in the summer.”
I think the original message is true for older versions of GPT, though, and AI being thrust into everything results in a lot of the errors I've seen.
Also, "a summer day in January" only makes sense in the southern hemisphere.
According to German news broadcasts, and maybe German meteorologists, a summer day is any day that reaches >25°C. Germany reached a new January record of 18.1°C this year, so give it another 30 years and we might get the first summer day of the year in January.
Why can't Bob give John 2 apples?
The restriction is merely implied; we just presume Bob only had what the question mentioned. Maybe Bob already had some apples. Bad AI. Lol
“Depends on how much Betty likes oranges in the summer.”
It did come up with a pretty accurately human and sassy response to the original question.
I feel like a really intelligent AI would recognize the context of the text it gets.
Like, if you or I heard someone saying a bunch of numbers we’d know to switch into math mode and start mathing.
But these things just dawdle along and start repeating nursery rhymes or whatever because they’re not smart enough to know what they’re doing.
Kinda surprised Wolfram Alpha hasn't done more in the AI space.
Yeah, LLMs aren't AI. They are just a fancy Markov model... You need controllers on top to decide when you want to make sentences and when you need to do something else. A controller could be an LLM, but an LLM by itself is just a tool, not a system.
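"Fancy Markov model" is underselling the fancy, but the family resemblance is real: predict the next token from what came before. The unfancy version fits in a few lines (toy corpus made up for illustration):

```python
import random
from collections import defaultdict

# a Markov text generator: predict the next word purely from the
# current word, using counts from a tiny corpus
corpus = "the cat sat on the mat and the cat ran".split()
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

word = "the"
out = [word]
for _ in range(8):
    if not chain[word]:
        break  # dead end: no observed successor
    word = random.choice(chain[word])  # sample the next word
    out.append(word)
print(" ".join(out))
```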
Exactly right. True AI can control an NPC in a video game. Bungie created the perfect AI in 2001. ChatGPT can't play Halo, so it's not AI.
AI is a shit show because of how terribly it's being implemented, and math is the main example.