submitted 5 days ago by [email protected] to c/[email protected]

Apple published a paper criticizing the capabilities of Large Language Models (LLMs) in reasoning and formal logic. The paper builds on previous arguments made by Gary Marcus and Subbarao Kambhampati about LLMs' limitations in generalizing beyond their training distribution.

The authors demonstrate that even the latest "reasoning models" fail to solve classic problems like the Tower of Hanoi reliably — even when the solution algorithm is given to them in the prompt.
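For contrast, the Tower of Hanoi has a simple, exact recursive solution that a conventional program executes flawlessly for any number of disks. A minimal Python sketch (an illustration, not code from the paper):

```python
def hanoi(n, src, dst, aux, moves=None):
    """Return the list of moves transferring n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk directly
    hanoi(n - 1, aux, dst, src, moves)   # stack the n-1 disks back on top
    return moves

# An n-disk puzzle takes exactly 2**n - 1 moves; 3 disks -> 7 moves.
print(len(hanoi(3, "A", "C", "B")))  # → 7
```

The move count grows exponentially, which is part of why the paper's benchmark exposes models: a correct solver never loses track, no matter how long the move sequence gets.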

The paper argues that LLMs are not a substitute for well-specified conventional algorithms, and that their limitations are becoming clearer. LLMs are not a direct route to AGI; while the field of neural networks is not dead, the current approach has clear limitations.

The paper highlights the importance of combining human adaptiveness with computational brute force and reliability in AI development.

no comments (yet)
this post was submitted on 09 Jun 2025
9 points (100.0% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. Otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 6 years ago