AGI achieved 🤖 (lemmy.dbzer0.com)
submitted 2 days ago by [email protected] to c/[email protected]
[-] [email protected] 45 points 2 days ago* (last edited 2 days ago)

LLM wasn’t made for this

There's a thought experiment that challenges the concept of cognition, called the Chinese Room. It postulates a person locked in a room who doesn't speak a word of Chinese, but who follows an enormous rulebook to turn incoming Chinese messages into fluent Chinese replies. To the speaker outside passing notes under the door, the conversation seems real, and the question becomes: "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"

The LLM is literally a Chinese Room, and one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying; it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about chatbots miscounting the number of "r"s in the word "strawberry". So "2" is the stock response it knows via the meme reference, even though a much simpler, dumber machine designed to handle this basic question could have come up with the answer faster and more accurately.

When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They built a glorified card catalog: a device that can only take inputs, feed them through a massive library of responses, and sift out the highest-probability answer, without actually knowing what the inputs or outputs signify cognitively.
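The "sift out the highest-probability answer" mechanic the comment describes can be sketched with a toy bigram model. This is a deliberately crude illustration, not how a real LLM works internally: it "answers" by emitting whichever word most often followed the previous one in its training text, with no notion of what any word means.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=5):
    # Greedily emit the highest-probability next word at each step.
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Made-up miniature "training data" echoing the strawberry meme.
model = train_bigrams("strawberry has two rs strawberry has two rs strawberry is red")
print(generate(model, "strawberry", 3))  # → strawberry has two rs
```

Note the model confidently reproduces the stock "two rs" answer purely because that sequence dominates its training counts.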

Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. That's also, incidentally, why Deepseek was running laps around OpenAI and Gemini as of last year.

Imagine asking a librarian "What was happening in Los Angeles in the summer of 1989?" and having that person fetch you a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plots of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.

That's modern LLMs in a nutshell.

[-] [email protected] 1 points 1 day ago

Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you ... That's modern LLMs in a nutshell.

I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.

IMO, one of the key ideas with the Chinese Room is the assumption that the computer / rulebook in the experiment has effectively infinite capacity: no matter what symbols are passed to it, it can come up with an appropriate response. But, obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that's semantically similar to a meme, joke or logic puzzle. The vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
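The failure mode described here (a rephrased question collapsing onto the nearest meme) can be sketched with plain bag-of-words similarity. Everything below is made up for illustration: the tiny "training" table, the misspelled question, and the similarity measure all stand in for vastly larger real systems.

```python
from collections import Counter
import math

def cosine(a, b):
    # Bag-of-words cosine similarity between two strings.
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = (math.sqrt(sum(v * v for v in wa.values()))
            * math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

# Toy "training data": the meme dominates anything strawberry-shaped.
training = {
    "how many r letters are in strawberry": "2",  # the meme's stock answer
    "what is the capital of france": "Paris",
}

def answer(question):
    # Pick whichever stored entry is most surface-similar to the input.
    best = max(training, key=lambda t: cosine(question, t))
    return training[best]

# A question that differs in an important way (different word!) still
# maps onto the meme entry, because almost all its words match it.
print(answer("how many r letters are in strawbberry"))  # → 2
```

The point of the sketch: matching by surface similarity alone has no way to notice that the one word that changed was the one that mattered.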

[-] [email protected] 0 points 1 day ago

Can you explain the difference between understanding the question and generating the words that might logically follow? I'm aware that it's essentially a more powerful version of how auto-correct works, but why should we assume that shows some lack of understanding at a deep level somehow?

[-] [email protected] 1 points 23 hours ago

Can you explain the difference between understanding the question and generating the words that might logically follow?

I mean, it's pretty obvious. Take someone like Rowan Atkinson, whose death has been misreported multiple times. If you ask a computer system "Is Rowan Atkinson dead?" you want it to understand the question and give you a yes/no response based on actual facts in its database. A well-designed program would know to prioritize recent reports as more authoritative than older ones. It would know which sources to trust, and which not to trust.

An LLM will just generate text that is statistically likely to follow the question. Because there have been many hoaxes about his death, it might use that as a basis and generate a response indicating he's dead. But, because those hoaxes have also been debunked many times, it might use that as a basis instead and generate a response indicating that he's alive.

So, even if he really did just die and it was reported in reliable, fact-checked news sources, the LLM might still say "No, Rowan Atkinson is alive; his death was reported via a viral video, but that video was a hoax."

but why should we assume that shows some lack of understanding

Because we know what "understanding" is, and that it isn't simply finding words that are likely to appear following the chain of words up to that point.

[-] [email protected] 1 points 11 hours ago

If you were just a hater, that would be cool with me. I don't like "ai" either. But the explanations you give are misleading at best. It's embarrassing. You fail to realise that NOBODY KNOWS why or how they work. It's just extreme folly to pretend you know these things. They've been observed to reason about novel ideas, which is why the scientists who work with them are confused about why it happens. It's not just data lookup. You think the entire Web and the history of man fits in 8 GB? You are educating people with your basic rage-filled opinion, not actual answers. You are angry at the discovery, we get that. You don't believe in it. Ok. But don't say you know what it does and how, or what OpenAI does behind its closed doors. It's just embarrassing. We are working on papers to try to explain the emergent phenomena we discovered in neural nets that make it seem like they can reason and output mostly correct answers to difficult questions. It's not in the "data" it looks for. You could just start learning if you want to be an educator in the field.

[-] [email protected] 1 points 19 hours ago

The Rowan Atkinson thing isn't misunderstanding, it's understanding but having been misled. I've literally done this exact thing myself: said something was a hoax (because in the past it was), but then it turned out there was newer info I didn't know about. I'm not convinced LLMs as they exist today don't prioritize sources -- if trained naively, sure, but these days they can, for instance, integrate search results, and can update on new information. If the LLM can answer correctly only after checking a web search, and I can do the same only after checking a web search, that's a score of 1-1.

because we know what "understanding" is

Really? Who claims to know what understanding is? Do you think it's possible there can ever be an AI (even if different from an LLM) which is capable of "understanding"? How can you tell?

[-] [email protected] 2 points 13 hours ago

I’m not convinced LLMs as they exist today don’t prioritize sources – if trained naively, sure, but these days they can, for instance, integrate search results, and can update on new information.

Well, it includes the text from the search results in the prompt; it's not actually updating any internal state (the network weights). A new "conversation" starts from scratch.
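The distinction being drawn here can be sketched in a few lines. Everything below is hypothetical scaffolding (the frozen-weights dict, `run_model`, the prompt format are all stand-ins, not a real API): new information enters only as text in the prompt, while the weights stay read-only for every conversation.

```python
# Stand-in for the trained network; inference reads this, never writes it.
FROZEN_WEIGHTS = {"version": "2025-06"}

def build_prompt(search_snippets, question):
    # "Updating on new information" = pasting new text into the prompt.
    context = "\n".join(search_snippets)
    return f"Search results:\n{context}\n\nQuestion: {question}\nAnswer:"

def run_model(weights, prompt):
    # Hypothetical inference call: a pure function of (weights, prompt).
    return f"[model {weights['version']}] reply to {len(prompt)} chars of prompt"

def chat_turn(search_snippets, question):
    # Every turn starts from the same frozen weights; the only "memory"
    # is whatever text the caller puts back into the prompt.
    return run_model(FROZEN_WEIGHTS, build_prompt(search_snippets, question))

reply = chat_turn(["News wire: Rowan Atkinson is alive and well."],
                  "Is Rowan Atkinson dead?")
```

Two separate "conversations" see different prompts but identical weights; nothing learned in one call carries over to the next.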

[-] [email protected] 1 points 8 hours ago

That's not true for the commercial AIs. We don't know what they're doing.

[-] [email protected] 1 points 11 hours ago

Yes, that's right: LLMs are stateless. They don't carry internal state between conversations. When I say "update on new information" I really mean "when new information is available in its context window, its response takes that into account."

this post was submitted on 11 Jun 2025
883 points (98.7% liked)
