Dawkin it for my AI (thelemmy.club)

I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

[-] turdas@suppo.fi 64 points 3 days ago* (last edited 3 days ago)

The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.

Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.

Yeah, I don't really believe in consciousness; it's just the dynamic firing of neurons, an emergent trait in other words. It's like traffic: you will never find it if you zoom in on one car. You have to see it from a distance. Same with consciousness: if you zoom in, it's not there anymore.

[-] SkaveRat@discuss.tchncs.de 62 points 3 days ago* (last edited 3 days ago)

Man, those conversations are eye-roll inducing

I like the shift away from "are they conscious" towards "what's a way to define consciousness?"

Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science

The most interesting part is the last paragraph

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

[-] pennomi@lemmy.world 23 points 3 days ago

It’s very difficult to define, isn’t it?

If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.

Or maybe in other words, object persistence (but for yourself) is all it takes in my opinion. Even the simplest of animals could be considered conscious by this definition.

I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.

[-] trem@lemmy.blahaj.zone 26 points 3 days ago

I feel like that's exactly why we don't have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness that excludes the groups you want to exploit.

Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who's to say they aren't sub-humans? Isn't it our job to enlighten them and also take their land and food and things and selves?

[-] turdas@suppo.fi 6 points 2 days ago

Personally I'm in the "consciousness is an illusion and every time you go to bed a different person wakes up in the morning" camp.

[-] Jaycifer@piefed.social 11 points 2 days ago

I would consider this to be two separate, semi-related concepts asserted together: one, that consciousness is an illusion, and one, that you are a different person each day.

The first point raises many questions: consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace "illusion" in those questions with "consciousness" and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.

As for being a different person after a lapse in awareness, I'd like to take it a step further and say that you could be considered a new person with every change in moment. It's easy enough to look back 10 years and say "yeah, that's a younger me, but they're not the same as me; I can just see the path that led to where I am now." Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts has shifted? When your internal monologue (or equivalent thought) asks "what is this guy talking about?", is it not thinking "what" in a brand-new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of "is"? And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?

There is a reason the word revelation exists: it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don't think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.

[-] turdas@suppo.fi 6 points 2 days ago

By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it's likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you're almost constantly recreating yourself from memory.

This would, incidentally, make us concerningly similar to current AI models.

Of course I have no way of actually knowing any of this. It's just what I'm betting on, because otherwise I think it's really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief "solves" this problem by rejecting the whole premise of uninterrupted consciousness.

[-] Godwins_Law@lemmy.ca 11 points 2 days ago

Blindsight by Peter Watts is a great sci-fi novel about consciousness

[-] topherclay@lemmy.world 4 points 2 days ago

That novel also does a shout-out to Richard Dawkins despite being set in the distant future because it was written in 2006.

[-] thesmokingman@programming.dev 6 points 2 days ago

Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

Could a being capable of perpetrating such a thought really be unconscious?

Oh it’s actually stupider than the tweet makes it seem.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Competency should imply the ability to complete a lengthy task (e.g. hunting, building a nest, writing a paper). LLMs can't.

[-] Nalivai@lemmy.world 4 points 2 days ago

LLMs are able to do things we previously thought only conscious beings would be capable of doing

"We", as in the lay misunderstanding of some pop science, still don't get what consciousness is and can't describe it. There are people alive today who, in their youth, didn't believe that black people are fully conscious. Dawkins demonstrated, by his communication with his personal friend and hero Epstein, that he doesn't fully believe that women are conscious. What we thought or didn't think previously can't be a good indication of anything.

[-] turdas@suppo.fi 2 points 2 days ago

"We" as in anyone who put any weight in the Turing test used to think that passing it would be some indication of consciousness, but now that LLMs can handily pass it, it's evident either that it isn't evidence of consciousness or that LLMs are conscious.

[-] Nalivai@lemmy.world 1 points 1 day ago

The Turing test can be reliably passed by a bot that repeats the last part of the previous sentence with a question mark at the end, and sprinkles in "oh, that's very smart, I need to think about it", "I am starting to fall in love with you, %USERNAME%", and an occasional "I am alive" thrown in randomly. And it was obvious for a long time.
Hell, a lot of people truly believe that their dogs can fully understand human speech because they bought them buttons that say words when you press them, conditioned their dog to press a button to get a reward, and then observe the dog pressing buttons.
Humans seem to be hardwired to mistake speech for intellect.
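The echo trick described above is essentially ELIZA-style reflection. As a toy sketch (function names, canned lines, and the probability are made up purely for illustration):

```python
import random

# Canned "personality" lines, per the comment above
CANNED = [
    "Oh, that's very smart, I need to think about it.",
    "I am starting to fall in love with you, %USERNAME%.",
    "I am alive.",
]

def echo_bot(message: str, rng: random.Random) -> str:
    """Reflect the tail of the user's message back as a question,
    occasionally substituting a canned line instead."""
    if rng.random() < 0.3:  # arbitrary chance of a canned line
        return rng.choice(CANNED)
    words = message.rstrip(".!?").split()
    tail = " ".join(words[-5:])  # echo the last few words
    return tail[:1].upper() + tail[1:] + "?"

rng = random.Random(0)
print(echo_bot("I think consciousness is an emergent property", rng))
```

Nobody would mistake this for a mind for long, of course; the point is only how little machinery "speech" requires.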

[-] turdas@suppo.fi 1 points 1 day ago

No, it can't. If you're actually saying that modern LLMs are no better at passing the Turing test than ELIZA, you are either trolling or an utterly delusional AI hater. Here, have a paper that proves you wrong: https://arxiv.org/pdf/2503.23674

I am not saying the Turing test is a good benchmark of consciousness. On the contrary, like I said, LLMs have proven that it is not. But a mere ten years ago even the most advanced chatbots had no hope of passing it, whereas now the most advanced ones are selected as the human over 70% of the time in a test that pits the LLM against a human head to head.

[-] Nalivai@lemmy.world 1 points 1 day ago

No, I'm saying the Turing test is a philosophical hypothetical from the time before computers, and doesn't actually show anything, because it relies on the least accurate tool at our disposal: the human pattern-recognition machine, one that is oh so happy to be fooled by ELIZAs of various sophistication. Chatbots have been passing the Turing test since the invention of the chatbot. Yeah, modern chatbots are better at it, but that's more of a damnation of our perception.

[-] turdas@suppo.fi 1 points 1 day ago

OK, sounds like we broadly agree then.

But as you can see in the paper I linked, ELIZA passes the Turing test in their experiment about 20% of the time (that is to say, it doesn't pass; passing is 50% in this test) whereas the best LLMs pass about 70% of the time (that is to say, they are significantly more convincing at being human than real humans).

[-] Nalivai@lemmy.world 1 points 1 day ago

That 20% figure is just a clear indication of how shit people are at conducting such a test, and that was basically my original point. 2 in 10 times, people were convinced by a particularly echoey room.

[-] Fedegenerate@fedinsfw.app 1 points 1 day ago* (last edited 1 day ago)

Turing test can be reliably passed by a bot that repeats last part of the previous sentence with a question mark at the end [...]

If an LLM were correct 2 in 10 times, would you call it "reliably correct"?

[-] Nalivai@lemmy.world 1 points 22 hours ago

If a person murders people only two days out of 10, they're a murderer; in order not to be a murderer, they need to never do that.
Reliably correct is when you're correct always. Demonstrably incorrect is when you're incorrect even sometimes.

[-] Fedegenerate@fedinsfw.app 1 points 22 hours ago* (last edited 22 hours ago)

Reliably correct is when you're correct always.

Agreed, except I'd add "almost". "My car reliably starts" means it starts almost always: more than 2 in 10 times. "You reliably turn up on time" doesn't mean you're late 8 in 10 times; it means you almost always turn up on time. To "almost always", or "reliably", do a thing means you fail 1 in 100, in 1,000, in 10,000 times. 10k is hyperbole, but the idea is clear, right? Almost always/reliably != failing 8 out of 10 times.

Your original point, that these bots which pass 2 in 10 times "reliably" pass, was wrong. Because they don't "always pass", they don't "almost always" pass, they don't even "pass the majority of the time"; they rarely pass.

Let's add our reliable = always substitution to the quote:

Turing test can be [always] passed by a bot that repeats last part of the previous sentence with a question mark at the end [...]

You see how that's wrong not just in fact, but in spirit too?

If a person murders people only two days out of 10, they're a murderer, in order to not be a murderer they need to never do that.

Relevance? Who says "Fedegenerate is reliably a murderer"?

Demonstrably incorrect is when you're incorrect even sometimes.

Relevance? You didn't use the phrase "demonstrably passed". I'd have no problem if you did.

[-] FinjaminPoach@lemmy.world 6 points 2 days ago* (last edited 2 days ago)

Thank you for the comment. I feel silly for not linking the article when people will probably want to read it.

My thoughts:

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.

I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

[-] turdas@suppo.fi 8 points 2 days ago

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

It's not a question of the value of consciousness, it's a question of its necessity. If an unconscious "zombie" can be, to an external observer, indistinguishable from a conscious being, then that means we've been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn't a new concept -- it's been explored many times in scifi -- but AI is now bringing the question from the realm of philosophy to the real world.

I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

This is less true than it ever was with reasoning models. Some of the latest reasoning models don't necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider "thinking".

But even besides reasoning models, I believe LLMs aren't as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this "speaking before thinking") and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There are also some fascinating experiments on people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.
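The difference between ordinary token-by-token reasoning and the latent-space idea mentioned above can be sketched with toy linear algebra. This is purely illustrative of the data flow (random weights, invented dimensions), not of Coconut's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 16

# Toy stand-ins for a transformer's pieces. The weights are random:
# this shows only where the information flows, not a trained model.
W_core = rng.normal(size=(d_model, d_model))  # "reasoning" block
W_out = rng.normal(size=(d_model, vocab))     # hidden state -> token logits
E = rng.normal(size=(vocab, d_model))         # token embedding table

def step(h):
    """One pass through the toy reasoning block."""
    return np.tanh(h @ W_core)

h0 = rng.normal(size=d_model)

# Ordinary loop: decode a token every step, then re-embed it,
# forcing each intermediate thought through the language bottleneck.
h_tok = h0.copy()
for _ in range(3):
    h_tok = step(h_tok)
    token = int(np.argmax(h_tok @ W_out))  # pick a token
    h_tok = E[token]                       # feed the token back in

# Latent loop (the Coconut-style idea): skip decoding entirely and
# feed the hidden state straight back into the reasoning block.
h_lat = h0.copy()
for _ in range(3):
    h_lat = step(h_lat)

print(h_tok.shape, h_lat.shape)
```

The latent loop never rounds its state off to a discrete word, which is the sense in which it "skips the language generation layer".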

[-] 5too@lemmy.world 4 points 2 days ago

This is one of the things that fascinates me about LLMs: they seem like a part of how our brains work, without the internal self-referential parts.

this post was submitted on 03 May 2026
616 points (98.6% liked)

Microblog Memes
