Kolmogorov complexity:
So we should see some proper definitions and basic results on Kolmogorov complexity, like in modern papers, right? We should at least see a Kt or a pKt thrown in there, right?
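For reference, the definitions he's skipping fit in two lines. Fix a universal machine U, then

    K(x)  = min { |p| : U(p) = x }                               (plain Kolmogorov complexity)
    Kt(x) = min { |p| + log t : U(p) prints x within t steps }   (Levin's time-bounded version)

and pKt is a probabilistic variant of Kt from the recent meta-complexity literature.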
Understanding IS compression — extracting structure from data. Optimal compression is uncomputable. Understanding is therefore always provisional, always improvable, never verifiably complete. This kills “stochastic parrot” from a second independent direction: if LLMs were memorizing rather than understanding, they could not generalize to inputs not in their training data. But they do. Generalization to novel input IS compression — extracting structure, not regurgitating sequences.
Fuck!
This somehow makes things even funnier. If he had any understanding of modern math, he would know that representing a set of things as points in some geometric space is one of the most common techniques in math. (A basic example: a pair of numbers can be represented by a point in 2D space.) Also, a manifold is an extremely broad geometric concept: knowing that two things are manifolds does not mean that they are the same or even remotely similar, without checking the details. There are tons of things you can model as a manifold if you try hard enough.
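To make this concrete, here's a toy sketch (mine, not from either paper he cites): two point clouds in the plane that are both one-dimensional manifolds in exactly the same sense, and that share nothing beyond that.

    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 200)

    # A circle: every small piece looks like a line segment, so it is a
    # one-dimensional manifold; globally it is a closed loop.
    circle = np.stack([np.cos(t), np.sin(t)], axis=1)

    # An outward spiral: every small piece also looks like a line segment,
    # so it is a one-dimensional manifold too; globally it is just a bent
    # arc, not even homeomorphic to the circle.
    spiral = np.stack([t * np.cos(3.0 * t), t * np.sin(3.0 * t)], axis=1)

    # "Both data sets lie on a manifold" is true here, and by itself it
    # tells you nothing about whether circle and spiral share any further
    # structure.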
From what I see, Scoot read a paper modeling LLM inference with manifolds and thought "wow, cool!" Then he fished for neuroscience papers until he found one that modeled neurons using manifolds. Both papers have blah blah blah something something manifolds, so there must be a deep connection!
(Maybe there is a deep connection! But the burden of proof is on him, and he needs to do a little more work than noticing that both papers use the word manifold.)