“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”

Amodei called it a "really hard" question, and he hesitated to give a yes or no answer.

Be nice to the stochastic parrots, folks.

[-] DasRav@hexbear.net 19 points 13 hours ago

The answer: "I made it the fuck up"

[-] LeeeroooyJeeenkiiins@hexbear.net 7 points 12 hours ago* (last edited 12 hours ago)

I mean, to be fair, can either of you "show the calculations" that "prove" consciousness?

"Cogito ergo sum"? Sure buddy, sure you're not just making that up??

[-] DasRav@hexbear.net 1 points 5 hours ago* (last edited 5 hours ago)

That's a terrible argument. I wasn't the one making the claim, so I don't know why I gotta prove anything. The frauds making the theft machines have to prove it. If the guy says "Suppose you have a model that assigns itself a 72 percent chance of being conscious" and the thing can't show its math, how is it on me to prove math I haven't even seen?

[-] purpleworm@hexbear.net 3 points 10 hours ago

We can pass the Turing test and it can't. I don't see what your point is, and it seems detrimental to the purpose of pushing back on the bullshit in the OOP.

[-] fox@hexbear.net 6 points 9 hours ago

LLMs pass the Turing test, which is just proof of the Turing test being a poor test of anything but people's gullibility.

[-] purpleworm@hexbear.net 3 points 8 hours ago* (last edited 8 hours ago)

Here's a post from someone who also doesn't like the Turing Test. As they point out, you can call it a Turing Test, but only pedantically: it's a version that was deliberately rigged in favor of the AI, including tests that ran only ~4-5 exchanges, which is completely ridiculous for a thorough evaluation by this metric. I don't think it has much to do with gullibility, because the limitations of these models become far more apparent over time. It's just more headline-mill bullshit. I don't share the author's view that the "coaching" is a relevant factor for the outcome's validity, though.

Granted, I'm also not trying to say that the Turing test is the ultimate metric or anything, just that it's an extremely low baseline that, employed in good faith, current LLMs plainly do not clear. They often can't even pass for one prompt if the one prompt is "spell strawberry" or something like that.

Edit: I also think the alternative that they propose is not great because it's mostly a question of video-processing. It's getting too hung up on information-processing questions to use something other than text.

this post was submitted on 17 Feb 2026
88 points (100.0% liked)

Slop.
