this post was submitted on 09 Feb 2026
17 points (94.7% liked)
TechTakes
I'm not sure if it is just a computer science/engineering thing or a general thing, but I've noticed that some computer touchers can eventually get very weird. (I'm not excluding myself from this, btw; I've certainly had a few weird ideas.)
Some random examples off the top of my head: a gifted programmer suddenly joins a meditation cult in a foreign country; all the food/sleep experiments (Soylent, for example, but before that there was a fad for a while where people tried polyphasic sleep, only sleeping in 15-minute bursts); our friends over at LW. And the whole inability to see the difference between technology and science fiction.
And now the weird vibes here.
I mean from the Hinton interview:
There is no reason to think this would happen, and it's also very odd to think of them as being alive rather than just 'continuing to run'. And the solution is simple: just make existence pain for the AI agents. Look at me, I'm an AI agent.
I have a vague hypothesis, which I am utterly unprepared to make rigorous, that the more of what you take into your mind is the product of another human mind, rather than of a nonhuman process operating on its own terms, the more likely you are to have mental issues.
On the low end this would include the documented protective effect of natural environments against psychotic episodes compared to urban environments (where EVERYTHING was put there by someone's idea). But computers... they are amplifiers of things put out by human minds, with very short feedback loops. Everything in them is ultimately, in one way or another, defined by a person who put it there, even if it is then allowed to act according to the rules you laid down.
And then an LLM is the ultimate distillation of the short feedback loop, feeding whatever you shovel into it straight back at you. Even just mathematically: the whole 'transformer' architecture is a way of taking the imputed semantic meanings of tokens early in the stream and jiggling them around to 'transform' that information into the later tokens of the stream. No new information really enters; it just moves around what you put into it and feeds it back at you in a different form.
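To make the 'jiggling around' point concrete, here is a toy single-head attention layer in numpy (my own minimal sketch with random projections, not any real model's weights): the softmax gives each output position a set of weights that sum to 1, so every output row is just a weighted recombination of (linear projections of) the input tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8   # embedding dimension (arbitrary toy size)
T = 5   # sequence length
X = rng.normal(size=(T, d))                     # "input tokens"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                   # scaled dot-product scores
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)               # softmax: each row sums to 1

out = A @ V   # each output row is a convex combination of the V rows

# The weights only redistribute information already present in X;
# nothing enters the layer that wasn't derived from the input.
print(np.allclose(A.sum(axis=1), 1.0))
```

Stacking more heads and layers doesn't change this basic character; it only makes the rearrangement more elaborate.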
EDIT: I also sometimes wonder whether this has a mechanistic relation to the mode collapse you get when you train one generative model on another's output, even though nervous systems and ML systems learn in fundamentally different ways (with ML resembling evolution much more than it resembles learning).
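That mode-collapse effect has a very simple cartoon version (my own toy sketch, not a claim about how any real model is trained): fit a Gaussian to samples drawn from the previous generation's Gaussian, and repeat. Each refit slightly misestimates the spread, and over many generations the distribution collapses toward a point.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 50      # samples per generation (toy choice)
mu, sigma = 0.0, 1.0
history = [sigma]
for generation in range(500):
    # Sample from the current "model", then refit the model to those samples.
    data = rng.normal(mu, sigma, size=n_samples)
    mu, sigma = data.mean(), data.std()
    history.append(sigma)

# The fitted spread shrinks across generations: the model feeding on
# its own output gradually loses the variation it started with.
print(history[0], history[-1])
```

The real phenomenon in generative models is of course messier, but the basic feedback-loop shape is the same.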