“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”

Amodei called it a “really hard” question and hesitated to give a yes-or-no answer.

Be nice to the stochastic parrots, folks.

WhatDoYouMeanPodcast@hexbear.net 3 points 13 hours ago

@KobaCumTribute@hexbear.net

Right. The question I meant was thinking about how astrobiologists look for the presence of organic molecules so they can go "oh hey, something else is fixing nitrogen!" or something of the like, as a way to scan for life somewhere else. They define "life as we know it" so they're not scanning for silicon-based life or sentient crystals or whatever, which gives them a narrower, more testable hypothesis.

So the question I meant was not "what if LLMs get better?" because we here generally agree that LLMs have a limit that falls shy of having an internal model. We all, more or less, can cite the studies that assert this, and it's generally where the idea originates. But now, what if code created a philosophical zombie? What if you have proof that this is not life as we know it, but it appears to have an internal model, yearns for agency, and portrays suffering? It certainly doesn't have internal existence, but it does have dynamism and persistent modeling.

purpleworm@hexbear.net 6 points 12 hours ago* (last edited 12 hours ago)

philosophical zombie

P-zombies are question-begging. If it can do everything a real consciousness "would" do, then it is fully modeling that consciousness, to the point that a comparable consciousness exists within the process of the simulation (it has to, in order to consistently produce all those behaviors), and therefore the overall system is based on a consciousness. The p-zombie concept assumes there is otherwise a ghost in the machine, which only serves to confuse discussions.

Edit: Phrased another way, to get a machine or whatever that can fully replicate the behaviors of being conscious, you would need to "build" a consciousness, even if it looks very different from ours, in order to get that result.

Also we probably should not make a consciousness that is actually like a human's. A robot that feels grief isn't thereby really helping anyone, including the robot.
