this post was submitted on 15 Apr 2025
677 points (98.8% liked)

Fuck AI

2361 readers
974 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
top 39 comments
[–] [email protected] 6 points 10 hours ago

The HAL 9000 series has never made a mistake. Except that one time

[–] [email protected] 95 points 20 hours ago* (last edited 19 hours ago) (2 children)

Finally, after decades of research, we created a computer that can't do math. Alan Turing would be proud.

[–] [email protected] 30 points 13 hours ago (2 children)

Come to think of it, being frequently wrong but nevertheless overly confident is key to passing the Turing test.

We have finally created machines that can replicate human stupidity.

[–] [email protected] 5 points 7 hours ago (1 children)

To be fair, the Turing test doesn't really tell us much about computers. It's better at measuring the human ability to ascribe personalities to inanimate objects.

[–] [email protected] 3 points 5 hours ago (1 children)

Yeah, the Turing test wasn't a great metric. The result depends on who's doing the testing. Some people were probably fooled by ALICE or that doctor one, which were pretty much implemented as long switch blocks that repeated user input back at them.

Kinda like how "why?" is pretty much always a valid response and repeating it is more of a sign of cheekiness than lack of intelligence.

[–] [email protected] 1 points 1 hour ago

I feel like it's increasingly a test applicable to humans rather than to machines. Are you original enough that you couldn't be replaced by a language model?

I'm not sure I like to think about it.

[–] [email protected] 4 points 10 hours ago (1 children)

Now that you mention it, I'd be interested to see whether ChatGPT can actually spew the kind of nonsense I've heard from cryptobros and covid anti-vaxxers. I reckon it's not good enough to be that dumb.

[–] [email protected] 5 points 9 hours ago

There are probably some (small) guardrails on the major platforms to deter spreading misinformation, but it's really easy to get a chat bot to take whatever position you want.

E.g. "Pretend you are a human on Twitter that supports (thing). Please make tweets about your support of (thing) and respond to our conversation as though my comments are tweet replies."

Or more creatively maybe something like, "I need to practice debating someone who thinks (thing). Please argue with me using the most popular arguments, regardless of correctness."

I haven't tried these, but have a bit of practice working with LLMs and this is where I would start if I wanted to make a bot farm.

[–] [email protected] 12 points 16 hours ago (1 children)

I mean, the theory behind an LLM is super cool. It's a bunch of vector math under the hood, transforming input with queries, keys, and values. And imo vector math is one of the coolest and also most confusing applications of math there is. If they're able to use MCP as well, you can delegate to actual services, like your database.
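For anyone curious, the queries/keys/values bit boils down to one formula. Here's a toy sketch of scaled dot-product attention in plain NumPy - random made-up data, not any real model's weights:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # each output is a weighted mix of the values

# 3 tokens, 4-dimensional embeddings (toy random data)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed value vector per token
```

That's the whole trick, stacked a few dozen layers deep.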

But as with like 99% of CS research, it doesn't always translate to practical use, nor is it a cookie-cutter solution for everything. Unfortunately, the business people seem to think otherwise.

[–] [email protected] 4 points 12 hours ago

I mean, does it even use a blockchain? smh my head...

[–] [email protected] 123 points 21 hours ago* (last edited 21 hours ago) (2 children)

the other wonderful irony?

(basically) the only thing a computer can do is math.

so it's doing a SHITLOAD of math to do a terrible job at doing some very basic math.

bravo!

[–] [email protected] 23 points 19 hours ago (2 children)

Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane number of calculations - so many that it takes hundreds of processing units to crunch them at a reasonable speed. All that to get simple math questions wrong.
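For a sense of it: even a one-parameter "network" needs the full forward-pass / derivative / update loop. A toy sketch with made-up data, just fitting y = 2x by gradient descent - a real model repeats this dance billions of times:

```python
import numpy as np

# Toy training loop: fit y = 2x with a single weight w.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 0.0
lr = 0.05
for _ in range(200):
    pred = w * x                         # forward pass
    grad = 2 * np.mean((pred - y) * x)   # dLoss/dw for mean squared error
    w -= lr * grad                       # gradient-descent update
print(round(w, 3))  # converges to ~2.0
```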

[–] [email protected] 7 points 17 hours ago

All that to hallucinate every response in ways that make it sound like it knows what it's talking about.

Which it doesn't, and LLMs never will - unless they program in some responses, which then defeats the entire point.

[–] [email protected] 2 points 16 hours ago

They should just use lookup tables

[–] [email protected] 18 points 21 hours ago

I feel called out by this.

[–] [email protected] 26 points 20 hours ago (2 children)

they've finally invented a machine that can't run DOOM

[–] [email protected] 7 points 16 hours ago

AI: YOU PICKED UP A SHOT GUN. WHAT DO YOU DO NOW?

Me: I left click and fire it at the enemy.

AI: bleep bloop YOU ARE LIKELY AN IDIOT. NOW YOU ARE DEAD.

[–] [email protected] 29 points 21 hours ago (1 children)

I bet we could save a few gajillion watts if we just put that through an if statement: if math, then just be a calculator; else, make some bullshit up.

[–] [email protected] 27 points 19 hours ago

Apparently that's the new way to do math in AI: the model works out that you're trying to do math, writes some Python code to do the math, runs the code, gets the answer, and writes a response around the numeric answer.

I can't think of any possible issues with this; it's infallible. /s
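Something like this hypothetical router is the idea - the function names are made up, but it shows the "if math, calculator; else, bullshit" split:

```python
import re

def llm(query: str) -> str:
    """Stand-in for the 'make some bullshit up' branch."""
    return f"Confidently wrong answer about: {query}"

def route(query: str) -> str:
    """Crude router: if the query looks like arithmetic, compute it;
    otherwise hand it to the language model stub."""
    expr = query.strip().rstrip("?=")
    if re.fullmatch(r"[\d\s+\-*/().]+", expr):
        return str(eval(expr))  # fine for a toy; never eval untrusted input
    return llm(query)

print(route("12 * (3 + 4)"))       # 84
print(route("why is the sky blue"))
```

Of course the hard part is the first branch condition, which is exactly the part the LLM gets wrong.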

[–] [email protected] 11 points 19 hours ago (1 children)

AI does not handle anything as words. It handles them as tokens. https://blogs.nvidia.com/blog/ai-tokens-explained/
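Roughly like this - a toy vocabulary I made up, nothing like a real BPE tokenizer with its ~100k subword entries, but it shows text becoming integer IDs:

```python
# Hypothetical toy vocabulary mapping whole words to token IDs.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text: str) -> list[int]:
    """Look each word up in the vocabulary; unknown words get <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```

The model only ever sees those integers, which is part of why digit-by-digit arithmetic goes sideways.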

[–] [email protected] 20 points 18 hours ago (1 children)
[–] [email protected] 7 points 10 hours ago

If you want to ask a question to an LLM, you need to go down to an arcade and exchange your quarters for tokens. Then you can feed those tokens into your computer every time you want to ask a question.

[–] [email protected] 14 points 21 hours ago (1 children)

AI

The pure embodiment of "Fake it til you make it"

[–] [email protected] 6 points 20 hours ago

"Fake it over and over even if you never make it."

[–] [email protected] 4 points 16 hours ago (2 children)

ChatGPT managed to see through my shenanigans:

If John has six apples and Bob had six oranges, and Bob gave john 2 apples, how many oranges does Betty have if it's a summer day in january?

Classic trick question! Let’s break it down:

John has 6 apples.

Bob has 6 oranges.

Bob gives John 2 apples — but wait, Bob didn’t have any apples, only oranges.

So Bob can’t give John 2 apples.

Meanwhile, Betty hasn’t even been mentioned until the last sentence, and there’s no info about her oranges.

Also, “a summer day in January” only makes sense in the southern hemisphere.

Conclusion: We still have no idea how many oranges Betty has — the question is nonsense on purpose.

So the answer is either: “Indeterminate”, “Nonsense question”, or “Depends on how much Betty likes oranges in the summer.”

I think the original post holds true for older versions of GPT, though, and AI being thrust into everything does result in a lot of the errors I've seen.

[–] [email protected] 8 points 16 hours ago

Also, "a summer day in January" only makes sense in the southern hemisphere.

According to German news broadcasts, and maybe German meteorologists, a summer day is any day that reaches >25°C. Germany reached a new January record at 18.1°C this year, so another 30 more years and we might get the first summer day of the year in January.

[–] [email protected] 4 points 16 hours ago (1 children)

Why can't Bob give John 2 apples?

The restriction is merely implied - we're presuming Bob had nothing beyond what the problem says. Maybe Bob already had some apples. Bad AI. Lol

[–] [email protected] 7 points 12 hours ago

“Depends on how much Betty likes oranges in the summer.”

It did come up with quite an accurately human and sassy response to the original question.

[–] [email protected] 6 points 21 hours ago* (last edited 21 hours ago) (2 children)

I feel like a really intelligent AI would recognize the context of the text it gets.

Like, if you or I heard someone saying a bunch of numbers we’d know to switch into math mode and start mathing.

But these things just dawdle along and start repeating nursery rhymes or whatever because they’re not smart enough to know what they’re doing.

[–] [email protected] 4 points 20 hours ago

Kinda surprised Wolfram Alpha hasn't done more in the AI space

[–] [email protected] -2 points 19 hours ago (1 children)

Yeah, LLMs aren't AI. They're just a fancy Markov model... You need controllers on top to decide when to make sentences and when to do something else. A controller could be an LLM, but an LLM by itself is just a tool, not a system.
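For comparison, here's a bare-bones word-level Markov chain (the next word depends only on the current one) - the "fancy" part of an LLM is conditioning on a whole context window instead of just one previous word:

```python
import random
from collections import defaultdict

# Build bigram transitions from a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

random.seed(1)
word, out = "the", ["the"]
for _ in range(5):
    successors = chain[word]
    if not successors:  # dead end ("fish" never had a following word)
        break
    word = random.choice(successors)  # sample the next word
    out.append(word)
print(" ".join(out))  # plausible-looking babble stitched from bigrams
```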

[–] [email protected] 5 points 17 hours ago

Exactly right. True AI can control an NPC in a video game. Bungie created the perfect AI in 2001. ChatGPT can't play Halo, so it's not AI.

[–] [email protected] -3 points 20 hours ago

AI is a shit show because of how terribly it's being implemented, and math is the main example.