Consciousness requires dynamism, persistent modeling, and internal existence. These models are more like massive, highly compressed and abstracted books: static objects referenced by an outside function, which synthesizes data by feeding the model an input, then feeding it its own output over and over, until the script logic decides to return the accumulated output to the user as text. They are conscious the way a photograph is a person when you view it: an image of reality frozen in place that lets an outside observer synthesize other data through inference, guesswork, and just making up the missing bits.
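To make that "static book plus outer script" point concrete, here's a toy sketch of that loop. Everything in it is made up for illustration (the `FROZEN_WEIGHTS` table and `next_token` are stand-ins for a real trained model, not any actual library), but the shape of the loop is the same: the model never changes, and all the apparent activity lives in the outer script.

```python
# Stand-in for the static, trained artifact: a frozen lookup table.
FROZEN_WEIGHTS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def next_token(context: list[str]) -> str:
    """Look up a continuation from the frozen model; it has no state of its own."""
    return FROZEN_WEIGHTS.get(context[-1], "<eos>")

def generate(prompt: list[str], max_steps: int = 10) -> str:
    context = list(prompt)
    for _ in range(max_steps):
        token = next_token(context)   # model sees the input...
        if token == "<eos>":          # ...script logic decides when to stop...
            break
        context.append(token)         # ...and the model's own output is fed back in
    return " ".join(context)

print(generate(["the"]))  # "the cat sat down"
```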
Some people are very insistent that you can't make a conscious machine at all, but I don't think that's true. The problem here is that LLMs are just nonsense generators, albeit very impressive ones. They don't do internal modeling and categorically can't; they're completely static once trained and can only "remember" things by storing them as a list that gets added to their input every single time (sketched below); etc. They don't have senses, they don't have thoughts, they don't have memories, they don't even have good imitations of these things. They're a dead end that, at most, could eventually serve as a sort of translation layer between a more sophisticated conscious machine and people, shortcutting the problem of trying to teach it language on top of everything else it would need.
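Here's what that kind of "memory" amounts to, as a minimal sketch. The `complete` function is a hypothetical placeholder for a call to some frozen model (not a real API); the point is that nothing is ever written back into the model itself, and "remembering" is just re-sending the whole log as part of the next input.

```python
def complete(text: str) -> str:
    # Placeholder for a real model call; just echoes a canned reply here.
    return f"(reply to: {text.splitlines()[-1]})"

history: list[str] = []  # the only thing that persists between turns

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)        # the entire log is resent every time
    reply = complete(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat_turn("remember that my name is Ada"))
print(chat_turn("what is my name?"))   # it only "knows" because the log was resent
```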
Human (or, debatably more precisely, animal or vertebrate etc., idk where the line is) consciousness requires...
It would be harder to prove, but there's nothing that says aliens or machines have to match us like that to have consciousness. LLMs certainly aren't conscious, of course.
@KobaCumTribute@hexbear.net
Right. The question I had in mind comes from how astrobiologists look for the presence of organic molecules to go "oh hey, something else is fixing nitrogen!" or something like that, as a way to scan for life somewhere else. They define "life as we know it" so that they're not scanning for silicon-based life or sentient crystals or whatever, which makes for a narrower and more testable hypothesis.
So the question I meant was not "what if LLMs get better?", because we here generally agree that LLMs have a limit that falls shy of an internal model. We can all, more or less, cite the studies that assert this, and it's generally where the idea originates. But now: what if code created a philosophical zombie? You have proof that it is not life as we know it, but it appears to have an internal model, yearns for agency, and portrays suffering. It certainly doesn't have internal existence, but it does have dynamism and persistent modeling.
P-zombies are question-begging. If something can do everything a real consciousness "would" do, then it is fully modeling a consciousness, to the point that a comparable consciousness exists within the process of the simulation (otherwise you couldn't consistently get all of those behaviors from it), and therefore the overall system is built on a consciousness. The p-zombie argument assumes there is otherwise a ghost in the machine, which only serves to confuse discussions.
Edit: Phrased another way, to get a machine or whatever that can fully replicate the behaviors of being conscious, you would need to "build" a consciousness, even if it looks very different from ours.
Also we probably should not make a consciousness that is actually like a human's. A robot that feels grief isn't thereby really helping anyone, including the robot.