Well yeah - because that’s not how LLMs work. They generate sentences that conform to the word-relationship statistics learned during training (i.e. built up by comparing patterns across all the data the model was trained on). It does not have any kind of logic and it does not know things. It literally just navigates a complex web of relationships between words, using the prompt as a guide, producing sentences that look statistically similar to the average of the text it was trained on.
TL;DR: It’s an illusion. You don’t need to run experiments to realize this, you just need to understand how AI/ML works.
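To make the "navigates word statistics" point concrete, here is a minimal, purely illustrative sketch of the generation loop. The bigram table and the words in it are made up for this example; a real LLM replaces the table lookup with a neural network scoring every token in its vocabulary given the whole prompt, but the shape of the loop - score, normalize, sample, append, repeat - is the same.

```python
# Illustrative sketch only: autoregressive next-token sampling.
# The "model" is a toy bigram table, not a real LLM.
import math
import random

# Hypothetical "statistics" standing in for what training would learn:
# for each previous word, a score for each candidate next word.
BIGRAM_SCORES = {
    "the": {"cat": 2.0, "dog": 1.5, "idea": 0.2},
    "cat": {"sat": 2.5, "ran": 1.0, "the": 0.1},
    "dog": {"ran": 2.0, "sat": 1.2, "the": 0.1},
    "sat": {"down": 2.0, "the": 0.3},
    "ran": {"away": 2.0, "the": 0.3},
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def generate(prompt_word, max_tokens=5):
    """Generate text one token at a time by sampling from the
    distribution implied by the learned statistics."""
    out = [prompt_word]
    for _ in range(max_tokens):
        scores = BIGRAM_SCORES.get(out[-1])
        if not scores:
            break
        probs = softmax(scores)
        words, weights = zip(*probs.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Run it a few times and you get plausible-looking phrases, but there is no notion of truth anywhere in the loop - which is the point being made above.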
It does not have any kind of logic and it does not know things. It literally just navigates a complex web of relationships between words using the prompt as a guide, creating sentences that look statistically similar to the average of all trained sentences.
While all of what you say is true on a technical level, it might evade the core question. Like, maybe that's all human brains do as well, just in a more elaborate fashion. Maybe logic and knowing are emergent properties of predicting language: if these traits help make better word predictions, maybe they emerge in order to support prediction.
In many cases, current LLMs have shown surprising capability to provide helpful answers, engage in philosophical discussion or show empathy. All in the duck typing sense, of course. Sure, you can brush all that away by saying "meh, it's just word stochastics", but maybe then, word stochastics is actually more than 'meh'.
I think it's a little early to take a decisive stance. We understand intelligence in humans poorly, which is a bad position from which to judge other forms of it. We might learn more about us and them as development continues.
Tell that to all the tech bros on the internet who are convinced that ChatGPT means AGI is just around the corner...