929 points | Apophenia (thelemmy.club)
submitted 3 days ago (04 Feb 2026, last edited 3 days ago) by ideonek@piefed.social to c/fuck_ai@lemmy.world
[-] CannonFodder@lemmy.world 0 points 2 days ago

There's no reason to think that the thought and analysis you perceive isn't based on exactly that kind of complex, historically weighted averaging in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that's exactly what's happening.
What's funny is people thinking their brain is anything magically different from an organic computer.
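If "weighted averages" sounds hand-wavy, here's a rough sketch of the textbook artificial-neuron version of the idea. The numbers are made up and purely illustrative, not a claim about actual brain measurements:

```python
# Purely illustrative: a single unit computing a weighted sum of its inputs,
# the basic operation artificial neurons (and, very loosely, models of
# biological ones) are built from. Weights are made up, not brain data.

inputs  = [0.2, 0.9, 0.4]   # hypothetical signals arriving at the unit
weights = [0.5, 0.3, 0.2]   # learned strengths of each connection

weighted_sum = sum(x * w for x, w in zip(inputs, weights))
output = 1 if weighted_sum > 0.5 else 0   # simple threshold "firing" rule

print(weighted_sum, output)
```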

[-] Catoblepas@piefed.blahaj.zone 11 points 2 days ago

In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

I encourage you to try to find and cite any reputable neuroscientist who believes we can even quantify what thought is, much less believes both A) we 'know the basic fundamentals of how brains work' and B) it's just like an LLM.

Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

[-] CannonFodder@lemmy.world -2 points 2 days ago

I never said it's directly like an LLM; that's a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to perform a kind of convolution much like the one many AI systems use (not by coincidence).

Scientists generally avoid metaphysical subjects like consciousness because they're inherently unprovable. We can look at the results of processing/thought and quantify their complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too. The leaps we've seen over the last few years, as the computational power of computers has crossed some threshold, show emergent abilities that only a decade ago were thought to be impossible.

Since we can never know anyone else's experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence; the concept of 'thought' in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it's just a physical process. So as we add more complexity and different structures to AI systems, there's no reason to think we can't make them do the same as our brains, or more.
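To be concrete about what I mean by convolution: it's just sliding a small set of weights over an input and taking weighted sums at each position. A minimal sketch with made-up numbers, nothing brain-specific:

```python
# Minimal 1-D convolution: slide a small kernel over a signal and take a
# weighted sum at each position. Numbers are made up for illustration.

signal = [1, 2, 3, 4, 5, 4, 3, 2, 1]
kernel = [0.25, 0.5, 0.25]   # a simple smoothing kernel

output = []
for i in range(len(signal) - len(kernel) + 1):
    window = signal[i:i + len(kernel)]
    output.append(sum(x * k for x, k in zip(window, kernel)))

print(output)  # each value is a weighted average of a local neighborhood
```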

[-] Catoblepas@piefed.blahaj.zone 8 points 2 days ago

show emergent abilities

Immediate mark of someone who is deceiving or has been deceived.

And so if it looks like intelligence, then it is intelligence

Wow, you mean I can understand Chinese?

[-] CannonFodder@lemmy.world 0 points 1 day ago

If you don't see the new things that computers can do with AI, then you are being purposely ignorant. There's tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn't have before.

And yes, if you can process written Chinese fully and respond to it, you do understand it.

[-] Catoblepas@piefed.blahaj.zone 2 points 1 day ago

And yes, if you can process written Chinese fully and respond to it, you do understand it.

Understanding is when you follow instructions without any comprehension, got it 👍

[-] CannonFodder@lemmy.world 0 points 1 day ago

You have to understand instructions on some level to be able to follow them. 👍🏻

[-] theQuickBrownFox@lemmy.today 7 points 2 days ago

What's your point? Do you believe that LLMs actually understand their own output?

[-] CannonFodder@lemmy.world -1 points 1 day ago

That's a difficult question. The semantics of 'understand', and the metaphysics of how that might apply, are rather unclear to me. LLMs have a certain consistent internal modeling that agrees with their output, so in that sense it's the same as humans' thought, which I think we'd agree is 'understanding'. But feeding 1+1 into a calculator will also consistently get the same result. Is that understanding? In some respects it is: the math is fully represented by the inner workings of the calculator. It doesn't feel to us like real understanding because it's trivial and so directly causal, but I think that's just because the problem is so simple.

So what we end up with is that, assuming an AI is reasonably correct, whether it is really understanding is more a matter of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that can be independent of a particular task but are combined at some level.

So, in a way, the LLM understands its limited mapping of a problem. But even though it's using the same input/output language that humans do, current LLMs don't understand things at anywhere near the level that humans do.

[-] athatet@lemmy.zip 6 points 1 day ago

It’s not a difficult question.

LLMs do not understand things.

[-] CannonFodder@lemmy.world -1 points 1 day ago

If you're going to define it that way, then obviously that's how it is. But do you really understand what understanding is?

[-] jaredwhite@humansare.social 7 points 2 days ago

Citation needed.
