this post was submitted on 23 Nov 2023
226 points (99.6% liked)

the_dunk_tank


Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

[–] [email protected] 29 points 1 year ago (10 children)

I don't know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.

I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produce sentience.

That's not something that I've bought into yet.

[–] [email protected] 46 points 1 year ago* (last edited 1 year ago) (22 children)

so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here's my philosophical examination of the issue.

the thing is, we don't even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here's the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh so smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'.

consciousness/sentience/qualia are distinctly not information processing phenomena, they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

Consider information processing, and the kinds of information processing that our brains/minds are capable of.

What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from) There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).

and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

so the options we are left with in terms of conclusions to draw are:

  1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
  2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
  3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia ('soul'-ism as you might put it, but no 'soul' is required for this conclusion, it could just as easily be termed 'mystery-ism' or 'unknown-ism')

And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.

[–] [email protected] 10 points 1 year ago (2 children)

here's the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can't.

Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness, but we can point to the results of observation and study, and to less complex systems whose workings we understand better, and say that it's most likely that the human brain functions in the same way, and that these processes produce Qualia.

It's not absolute proof, but there's nothing wrong with just saying that from what we understand, this is the most likely explanation.

Unless I'm misunderstanding what you're saying here, why is the idea that it can't be done the takeaway rather than it will take a long time for us to be able to say whether or not it's possible?

and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?

[–] [email protected] 16 points 1 year ago (6 children)

Why not?

because qualia are fundamentally subjective phenomena, and there is no conceivable way to arrive at subjective phenomena via objective physical quantities/measurements.

Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

this is not true. for example, take the example of a radio, presented to uncontacted people who do not know what a radio is. It would be reasonable for these people to assume that the voices coming from the radio are produced in their entirety inside the radio box/chassis; after all, when you interfere with the internals of the radio, it affects which voices come out and in what quality. and yet, because of a fundamental lack of understanding of the mechanics of the radio, and a lack of knowledge of how radios are used and how radio programs are produced and performed, this is an entirely incorrect assessment of the situation.

in this metaphor, the 'radio' is analogous to the 'brain' or 'body', and the 'voices' or radio programs are the 'consciousness' that is assumed to be coming from inside the box, but is in fact coming from outside the box, from completely invisible waves in the air. the 'uncontacted people' are modern scientists trying to understand that which is unknown to humanity.

this isn't to say that i think the brain is a radio, although that is a fun thought experiment, but to demonstrate why correlation does not, in fact, necessarily imply causation, especially in the case of the neural correlates of consciousness. consciousness definitely impinges upon or depends upon the physical brain, it is in some sense affected by it, no one would argue this point seriously, but to assume causal relationship is intellectually lazy.

[–] [email protected] 7 points 1 year ago (1 children)

because qualia are fundamentally subjective phenomena, and there is no conceivable way to arrive at subjective phenomena via objective physical quantities/measurements.

Having done some quick reading, I can see that qualia are definitionally subjective, but I would question how anyone could assert, with any level of confidence, that they possess internal mental experiences that "no amount of purely physical information includes", or that such a thing can even exist. Certainly not enough confidence to structure an argument around. The justification seems to be the idea that because we cannot do something now, that thing cannot be done. I don't find that convincing.

This might be going too far into the analogy, but I think the problem with a comparison to a radio is that if you examine the radio down to its smallest part, and then assemble a second radio, that radio will behave in the same way as the first.
Presumably as well, with enough examination, it would come to be understood that the voices coming from the radio are produced somewhere else, and there would be no reason for anyone to think that the voices themselves are appearing from an intangible and inherently subjective origin. If consciousness is essentially a puppeteer for the physical human body, that doesn't preclude consciousness existing physically somewhere else, and that the "broadcaster" isn't something capable of examination or imitation.

The whole argument seems to boil down to "maybe consciousness doesn't work the way science would currently suggest it does." but doesn't present any evidence that the consciousness is somehow unsolvable.

but to assume causal relationship is intellectually lazy.

Instead, assuming that an undetectable, intangible, and fundamentally unprovable mechanism is behind consciousness without proof is worse than lazy, it's magical thinking. While I don't think you could ever prove that that wasn't the case, it should only seriously be entertained once every other option has been thoroughly exhausted.

(Reading this back, this feels quite confrontational. I don't intend it to be, but I lack the ability to word it in the tone that I would prefer.)

[–] [email protected] 7 points 1 year ago (2 children)

how anyone could assert, with any level of confidence, that they possess internal mental experiences that "no amount of purely physical information includes", or that such a thing can even exist.

The justification seems to be the idea that because we cannot do something now, that thing cannot be done. I don't find that convincing.

it's not just that we cannot do it now, it's that it is literally definitionally impossible, even conceptually, to arrive at or explain subjectivity, assuming a physicalist model of the world that specifically excludes it in principle.

the claim is not that consciousness is 'unsolvable', but that it is unsolved, and that it is irreducible to terms of pure information processing. subjectivity is entirely separate from and unnecessary for information processing.

This might be going too far into the analogy

correct, it was merely to elucidate the difference between causation and correlation and the scientific method and attitude. the metaphor is not designed to interrogate subjectivity.

Instead, assuming that an undetectable intangible and fundamentally improvable mechanism is behind consciousness without proof is worse than lazy, it's magical thinking. While I don't think you could ever prove that that wasn't the case, it should only seriously be entertained once every other option has been thoroughly exhausted.

no, instead one should assume nothing, like a scientist should. you assume that you do not know until you actually do.

to go back to the analogy: you are here like one of the uncontacted people encountering a radio. after much experimentation and analysis, your group has concluded that the voice cannot come from inside but from some as-yet-unknown source outside, and you call them insane for positing even the hypothetical existence of such a thing, instead assuming it comes from inside in some way we don't yet understand (which is treated as the teleological inevitability of our current understanding, which obviously never needs to be revised).

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (4 children)

Donald Duck is correct here but also that’s precisely why techbros are so infuriating. They take that conclusion and then use it to disregard everything except the one thing they conveniently think isn’t based on chemicals, like free market capitalism or Eliezer “Christ the Second” Yud

Dismissing emotions just because they are chemicals is nonsensical. It makes no sense that that alone would invalidate anything whatsoever. But these people think it does because they are conditioned by Protestantism to think that all meaning has to come from a divine and unshakeable authority. That’s why they keep reinventing God, so they have something to channel their legitimate emotions through that their delusional brain can’t invalidate.

[–] [email protected] 9 points 1 year ago (1 children)

My issue with, say, "love is chemicals" isn't that the experience of feeling love is neurochemical activity. It's the crude reductionist conclusion of "and therefore it is meaningless just like based Rick Sanchez said, get schwifty!" so-true

Similarly, I don't hold a position that living brains are impossible to fully understand; it's that there's more left to know and a lot of unknowns left to explore. The implication of some people in this thread is that you must choose between "LLMs are at least as conscious as human beings or are getting there very soon" or "I am a faith healer crystal toucher sprinkled with fairy dust" which is a bullshit false dichotomy.

[–] [email protected] 8 points 1 year ago (1 children)

Yes, I agree completely. I had to rewrite my comment multiple times to clarify that, but yeah. Sorry :(

[–] [email protected] 9 points 1 year ago (1 children)

I sort of regret posting that meme because it was more cheeky and silly than an actual position I was taking, myself. The "dae le meat computers" reductionism enjoyer I was replying to (with the "therefore you must believe that LLMs are that close to sapience or else you believe in souls and are living in a demon haunted world unlike my enlightened euphoric Reddit New Atheist self" take) was abrasive enough where I was trying some levity but it didn't go over well.

[–] [email protected] 7 points 1 year ago (1 children)

I understand, either way the meme you posted is funny though because it would piss techbros off

[–] [email protected] 7 points 1 year ago

I understand, either way the meme you posted is funny though because it would piss techbros off

Judging by the reactions it got, it certainly did. sit-back-and-enjoy

[–] [email protected] 5 points 1 year ago (2 children)

"All knowledge is unprovable and so nothing can be known" is a more hopeless position than "existence is absurd and meaning has to come from within". I shall both fight and perish.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (3 children)

"All knowledge is unprovable and so nothing can be known"

Silly meme that I had just posted aside, that isn't my actual position and I don't think that is the position others here have taken. I said that there is a lot more left to be known and the current academic leading edge of neuroscience (not tech company marketing hype or pop nihilistic reductionistic Reddit New Atheist takes) backs that up.

I shall both fight and perish.

From here it looks like you're just touching the computer and doing the heavy lifting for LLM hype marketers.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

I think it does a lot of undue (and hopefully unintentional) heavy lifting for tech company hype marketers when someone implies that LLM treat printers might be comparable (or synonymous) to living organic brains because of the product's imitative presentation.

https://arxiv.org/abs/2311.09247

[–] [email protected] 7 points 1 year ago (3 children)

on a related note, dropping this rare banger line from wikipedia:

Some philosophers of mind, like Daniel Dennett, argue that qualia do not exist. Other philosophers, as well as neuroscientists and neurologists, believe qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.[2]

citation text from the wiki page for reference

Damasio, Antonio R. (2000). The feeling of what happens: body and emotion in the making of consciousness. A Harvest book. San Diego, CA: Harcourt. ISBN 978-0-15-601075-7.

Edelman, Gerald M.; Gally, Joseph A.; Baars, Bernard J. (2011). "Biology of Consciousness". Frontiers in Psychology. 2 (4): 4. doi:10.3389/fpsyg.2011.00004. ISSN 1664-1078. PMC 3111444. PMID 21713129.

Edelman, Gerald Maurice (1992). Bright air, brilliant fire: on the matter of the mind. New York: BasicBooks. ISBN 978-0-465-00764-6.

Edelman, Gerald M. (2003). "Naturalizing Consciousness: A Theoretical Framework". Proceedings of the National Academy of Sciences of the United States of America. 100 (9): 5520–5524. doi:10.1111/j.1600-0536.1978.tb04573.x. ISSN 0027-8424. JSTOR 3139744. PMID 154377. S2CID 10086119. Retrieved 2023-07-19.

Koch, Christof (2020). The feeling of life itself: why consciousness is widespread but can't be computed (First MIT Press paperback edition 2020 ed.). Cambridge, MA; London: The MIT Press. ISBN 978-0-262-53955-5.

Llinás, Rodolfo R. (2002). I of the vortex: from neurons to self. A Bradford book (1 ed.). Cambridge, Mass.; London: MIT Press. pp. 202–207. ISBN 978-0-262-62163-2.

Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014-05-08). Sporns, Olaf (ed.). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0". PLOS Computational Biology. 10 (5): e1003588. Bibcode:2014PLSCB..10E3588O. doi:10.1371/journal.pcbi.1003588. ISSN 1553-7358. PMC 4014402. PMID 24811198.

Overgaard, M.; Mogensen, J.; Kirkeby-Hinrup, A., eds. (2021). Beyond neural correlates of consciousness. Routledge Taylor & Francis.

Ramachandran, V.; Hirstein, W. (March 1997). "What Does Implicit Cognition Tell Us About Consciousness?". Consciousness and Cognition. 6 (1): 148. doi:10.1006/ccog.1997.0296. ISSN 1053-8100. S2CID 54335111.

Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (July 2016). "Integrated information theory: from consciousness to its physical substrate". Nature Reviews Neuroscience. 17 (7): 450–461. doi:10.1038/nrn.2016.44. ISSN 1471-0048. PMID 27225071. S2CID 21347087.

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago)

> be me
> literal philosopher of mind
> experiences things every moment of my life
> is asked if experiences exist
> “nah experiences aren’t real”

[–] [email protected] 7 points 1 year ago

"Because there is disagreement on what consciousness is, it must be an illusion. You do not exist, you are only a weird metaphysical phantasm which is somehow a more grounded and tenable position." oooaaaaaaauhhh

[–] [email protected] 6 points 1 year ago (1 children)

This is a bad summary of Dennett's view, or at least a misleading one. He thinks that 'qualia' as most philosophers of mind define the term doesn't refer to anything, and is just a weasel word obscuring that we really don't have much of an understanding of how brains do the things they do. Qualia get glossed as the "what-it's-like-ness" of experiences (e.g. the particular feeling of seeing the color blue), which isn't wrong, but is only part of the story. 'Qualia' is a technical term in the philosophy of mind literature, and has a lot of properties attached to it (privacy, incorrigibility, ineffability, so on). Dennett argues that qualia in that sense--the philosopher's qualia--is incoherent and internally inconsistent for a variety of reasons. This sometimes gets misrepresented as "Dennett thinks consciousness is an illusion" (a misreading that he, to be fair, could work harder to discourage), but that's not the view. His argument against the philosopher's qualia is pretty compelling, and doesn't imply that people aren't conscious. See "Quining Qualia" for a pretty accessible articulation of the argument.

[–] [email protected] 7 points 1 year ago

God damn what a good post

[–] [email protected] 24 points 1 year ago (4 children)

I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (i.e. so what changes if we determine that life is a simulation). Like they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff

I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress shrug-outta-hecks

[–] [email protected] 19 points 1 year ago (1 children)

I think it would be far less confusing to call them algorithmic statistical models rather than AI
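For what it's worth, the "algorithmic statistical model" framing is easy to make concrete. Below is a toy bigram sampler in Python; it is purely illustrative (real LLMs are neural networks trained on vastly more data), but the core job is the same statistical one: predict a plausible next token from observed frequencies.

```python
import random
from collections import defaultdict

# A toy "algorithmic statistical model": record which word follows which,
# then sample continuations from those observed counts.
def train_bigrams(text):
    counts = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a].append(b)  # duplicates preserve relative frequency
    return counts

def continue_text(model, start, length=5, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # no observed successor: stop generating
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept")
print(continue_text(model, "the"))
```

Nothing in this loop "understands" anything; it only replays statistics of its training text, which is the point of the naming complaint.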

[–] [email protected] 15 points 1 year ago

Absolutely, but AI is the marketing promise that they can hype, not deliver, and milk until it's dry

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago)

Actually, yeah, you're on it. These questions are epistemological. They're also phenomenological. Testing AI is as much about seeing how it responds and reacts as it is about what it is. It's silly. When it comes to AI right now, existing is measured by reaction, to see if it's imitating a human intelligence. I'm pretty sure "I react therefore I am" was never coined by any great, old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make and test AI believe that reaction measures intelligence.

[–] [email protected] 23 points 1 year ago (4 children)

I don't know where everyone is getting these in depth understandings of how and when sentience arises.

It's exactly the fact that we don't know how sentience forms that makes acting like fucking chatgpt is now on the brink of developing it so ludicrous. Neuroscientists don't even know how it works, so why are these AI hypemen so sure they've got it figured out?

The only logical answer is that they don't and it's 100% marketing.

Hoping that computer algorithms built to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.
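To make "superficially mimic neural connections" concrete, here is the standard artificial-neuron abstraction in a few lines of Python. The weights stand in for synapse strengths; everything else a biological neuron does (metabolism, neurotransmitter dynamics, glial interactions) is simply absent from the model.

```python
import math

# One artificial "neuron": a weighted sum of inputs squashed through a
# sigmoid. This is the entire unit that large networks are built from.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # output in (0, 1)

print(artificial_neuron([1.0, 0.5], [0.8, -0.2], 0.1))
```

The gap between this arithmetic and a living cell is the substance of the "shot in the dark" point above.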

[–] [email protected] 20 points 1 year ago (1 children)

You're making a lot of assumptions about the human mind there.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (2 children)

What assumptions? I was careful to almost universally take a negative stance not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.

[–] [email protected] 14 points 1 year ago* (last edited 1 year ago) (5 children)

As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

is an incredible claim, loaded with more assumptions than I have space for here. Human thought is a lot more than an algorithm arriving at outputs for inputs. I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

I don't feel like going into more detail now, but if you wanna look at the AI marketing with a bit more of a critical distance, I'd recommend two things here:
a short read: Language Is a Poor Heuristic For Intelligence
a listen: We Are Not Software: David Bentley Hart with Acid Horizon

Edit: also wanna share this piece about generative AI here. The part about trading the meaning of things for the mean of things resonates all throughout these artificial parrots, whether they parrot text or visuals or sound.

[–] [email protected] 13 points 1 year ago (2 children)

I agree; curious to see what hexbears think of my view:

Firstly there is no “theory of consciousness”. No proposed explanation has ever satisfied that burden of proof, even if they call themselves theories. “Brain = computer” is a retroactively applied analogy, just like everything was pneumatics 100 years ago and everything was wheels 2000 years ago and everything was fire…

I would think that assuming that if you process hard enough you get sentience is quite a religious belief. There is no basis for this assumption.

And materialism isn’t the same thing as physicalism. And just because a hypothesis is physical doesn’t mean it’s automatically correct. Not being a religious explanation is like the lowest bar that there’s ever been in history.

“Sentience is just algorithms” assumes a degree of understanding of the brain that we just don’t have, equates neurons firing to computer processing without reason, and assumes that processing must be the mechanism which leads to sentience without basis.

We don’t know anything about sentience, so going “well you can’t say it’s not computers” is like going “hypothetically there could be a unicorn that shits out solid gold bars that lives on Pluto.” Like, that’s not how the burden of proof works.

Not to mention the STEM “philosophy stoopid” dynamics going on here.

[–] [email protected] 9 points 1 year ago

I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

Either redditors don't, or they wish they didn't.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (2 children)

I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

How do you know?

How can you know that an inner life, emotions, thoughts and dreams cannot and do not arise from a system of algorithms?

[–] [email protected] 9 points 1 year ago (2 children)

because fundamentally subjective phenomena can never be explained entirely in terms of objective physical quantities without losing important aspects of the phenomena.

[–] [email protected] 6 points 1 year ago

Just because we can't do something with the tools we have available to us now, does not mean that the thing is impossible itself.

[–] [email protected] 5 points 1 year ago (3 children)

Just to be clear, the claim is that human thought is qualitatively different than an algorithm, I just haven't been convinced of the claim. I chose my words incredibly carefully here, this isn't me being pedantic.

Anyway, I don't know how you've come to the definitive conclusion that somehow emotions aren't information. Or that thoughts and dreams are somehow not outputs of some process.

Nothing you've outlined is necessarily impossible to derive as an output of some process. It's actually quite possible that they're only derived as an output of some process, unless you think they're spawned into existence without causes, which I think religious people do believe (this is the essence of a free soul). I'm not religious.

[–] [email protected] 8 points 1 year ago

An algorithm does not exist as a physical thing. When applied to computers, it's an abstraction over the physical processes taking place as the computer crunches numbers. To me, it's a massive assumption to decide that just because one type of process (neurons) can produce consciousness, so can another (CPUs and their various types of memories), even if they perform the same calculation.

[–] [email protected] 20 points 1 year ago

To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allows sentience to arise.

this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn't mean we are close to anything.

Consider the complexity of a sentient, multicellular organism. That's trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that's still more happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity by only looking at the synaptic connections between neurons, and ignoring everything else the cells are doing.
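A rough back-of-the-envelope comparison illustrates the scale gap being described. The neuron and synapse counts below are common textbook estimates, and the parameter count is an assumed stand-in for a very large contemporary model; note too that a synapse is a dynamic chemical system, not a single stored number, so this comparison still flatters the model.

```python
# Common order-of-magnitude estimates for the human brain.
NEURONS = 86e9              # ~86 billion neurons
SYNAPSES_PER_NEURON = 7e3   # ~thousands of synapses per neuron
synapses = NEURONS * SYNAPSES_PER_NEURON  # ~6e14 connections

# Assumed stand-in figure for a very large LLM's parameter count.
LLM_PARAMETERS = 1.8e12

# Even counting each synapse as a single weight, the brain has
# hundreds of times more connections than the model has parameters.
print(synapses / LLM_PARAMETERS)
```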

[–] [email protected] 17 points 1 year ago (5 children)

I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don't think something as general as consciousness can be accurately called an algorithm.
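For reference, the textbook sense of "algorithm" being invoked here (a finite, explicit recipe guaranteed to terminate) is exemplified by Euclid's gcd:

```python
# Euclid's algorithm: a finite number of specific steps that provably
# terminates, solving one well-defined category of problem.
def gcd(a, b):
    while b != 0:        # each iteration strictly shrinks b, so it halts
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

Whether consciousness fits a definition this narrow is exactly the point under dispute in the thread.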

[–] [email protected] 10 points 1 year ago (1 children)

Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.

[–] [email protected] 9 points 1 year ago (1 children)

That doesn't mean it's algorithmic, though. A whole branch of mathematics (and, as a consequence, physics) is non-algorithmic.

[–] [email protected] 15 points 1 year ago* (last edited 1 year ago) (7 children)

Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don't even have language.

It just screams of a marketing scam. I'm not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don't think this is what they're doing. I think they're just trying to sell the next Google AdSense

[–] [email protected] 12 points 1 year ago (2 children)

That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago)

"I am a very smart atheist that can not be fooled by fairy tales, therefore LLMs sound like the exact same thing as living brains. I can not be sold a bad bill of goods; my contempt for religion means I believe tech company marketing hype." galaxy-brain

EDIT: "Also, tech companies are above superstitious beliefs." https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

Also, some light reading for those who need it.

https://arxiv.org/abs/2311.09247

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

How is that plausible? The human brain has more processing power than a snake's. Which has more power than a bacterium's (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests, will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written

chatGPT : freshman-year-"hello world"-program
human being : amoeba
(the : symbol means it's being analogized to something)

a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they're still just data points, which have no sentience or consciousness.

Both are something much greater than the sum of their parts, but in a human's case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us, our lung cells and nephrons and etc are basically little tiny specialized amoebas. ChatGPT doesn't....do anything, it has no will