this post was submitted on 07 Aug 2024
129 points (93.3% liked)
Technology
LLMs will not give us AGI. This is obvious to anyone who knows how they work.
Maybe it can. If you find a way to port everything to text by hooking in different models, the LLM might be able to reason about everything you throw at it. Who even defines how AGI should be implemented?
The LLM is just trying to produce output text that resembles the patterns it saw in the training set. There's no "reasoning" involved.
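To make "producing output that resembles the training set" concrete, here's a toy sketch (my own illustration, not anything from the thread): a bigram model that picks the next word purely by how often it followed the previous word in its training text. Real LLMs do the same next-token prediction at vastly larger scale with learned weights instead of raw counts.

```python
from collections import Counter, defaultdict

# Toy "training set": the model will only ever echo patterns found here.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram frequency table).
counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def most_likely_next(token):
    # Greedy decoding: return the highest-frequency continuation.
    return counts[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often above
```

No reasoning happens anywhere in that loop; it's pure pattern frequency, which is the commenter's point.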
An LLM is basically just an orchestration mechanism. Saying an LLM doesn't do reasoning is like saying a step function can't send an email. The step function can't, but the Lambda I've attached to it sure as shit can.
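The Step Functions/Lambda analogy above can be sketched as a tool-dispatch loop. Everything here is hypothetical (the "model" is a stub that returns a tool choice): the point is that the model only *selects* an action, and attached code actually performs it.

```python
# Hedged sketch of an LLM as orchestrator. `fake_llm` stands in for a real
# model call; the surrounding loop delegates to plain functions, the way a
# Step Functions state machine delegates to an attached Lambda.

def fake_llm(prompt):
    # Stand-in for a model call: it only decides which tool to invoke.
    if "email" in prompt.lower():
        return {"tool": "send_email", "args": {"to": "ops@example.com"}}
    return {"tool": "none", "args": {}}

TOOLS = {
    "send_email": lambda args: f"email queued for {args['to']}",
    "none": lambda args: "no action",
}

def orchestrate(prompt):
    decision = fake_llm(prompt)                       # model picks the action
    return TOOLS[decision["tool"]](decision["args"])  # attached code executes it

print(orchestrate("Please email the ops team"))
```

The model itself never sends anything; capability lives in the tools wired around it.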
ChatGPT isn't just a model sitting somewhere. There are likely hundreds of services working behind the scenes to coerce the LLM into producing the right result. That might be entity resolution, expert mapping, perhaps even techniques that will "reason".
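"Expert mapping" in that sense can be sketched as a router that sends a query to a specialist handler instead of the general model. This is my own invented illustration (the crude keyword classifier and the `eval` call are demo-only shortcuts, not how any production system does it):

```python
# Hypothetical expert-mapping sketch: route a query to a specialist handler
# before falling back to the general model. All names are invented.

EXPERTS = {
    # eval() is used purely for this demo; never eval untrusted input.
    "math": lambda q: f"[math expert] {eval(q)}",
    "general": lambda q: f"[general model] echoing: {q}",
}

def route(query):
    # Crude classifier: arithmetic-looking queries go to the math expert.
    if all(c in "0123456789+-*/ ()" for c in query):
        return EXPERTS["math"](query)
    return EXPERTS["general"](query)

print(route("2 + 3 * 4"))  # handled exactly, not by token prediction
```

A real system would use a learned classifier rather than a character check, but the shape is the same: the orchestration layer, not the base model, supplies the exact answer.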
The initial point is right, though. This ain't AGI, not even close. It's just your standard compositional stuff with a new orchestration mechanism that is better suited for long-form responses - and wild hallucinations...
Source: Working on this right now.
Edit: Imagine downvoting someone who literally works on LLMs for a living. Lemmy is a joke sometimes...
They're very, very anti-AI and anti-crypto. I understand being against those, but Lemmy stops caring about logic when it comes to those topics.
I think many in the AI space are against the current state of how AI is being pushed, probably just as much as the average tech person.
What is ridiculous is that Lemmy prides itself on being both forward-thinking and tech-focused, and in reality it is far more closed-minded than Reddit or even Twitter. Given how heavily used Mastodon is in tech spheres, it makes Lemmy look like an embarrassment to the fediverse...
Yeah, there is every reason to be sceptical of the hype around AI, in particular from the big tech companies. But to a significant part of the Lemmy userbase, saying "AI" is like saying "witch" in 17th century Salem. To the point where people who are otherwise very much left-wing and anti-corporate will take pro-IP/corporate copyright maximalist stances just because that would be bad for AI.
You might be interested in NVIDIA NIM then, when you get a chance. Talk about orchestration:
https://developer.nvidia.com/nim