this post was submitted on 23 Nov 2023
226 points (99.6% liked)

the_dunk_tank

15923 readers

It's the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.

Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.

Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.

Rule 3: No sectarianism.

Rule 4: TERF/SWERFs Not Welcome

Rule 5: No ableism of any kind (that includes stuff like libt*rd)

Rule 6: Do not post fellow hexbears.

Rule 7: Do not individually target other instances' admins or moderators.

Rule 8: The subject of a post cannot be low hanging fruit, that is comments/posts made by a private person that have low amount of upvotes/likes/views. Comments/Posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to [email protected]

Rule 9: if you post ironic rage bait im going to make a personal visit to your house to make sure you never make this mistake again

founded 4 years ago

Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

[–] [email protected] 117 points 1 year ago (9 children)

For fuck's sake, it's just an algorithm. It's not capable of becoming sentient.

Have I lost it or has everyone become an idiot?

[–] [email protected] 58 points 1 year ago (3 children)

Crude reductionist beliefs such as humans being nothing more than "meat computers" and/or "stochastic parrots" have certainly contributed to the belief that a sufficiently elaborate LLM treat printer would be at least as valid a person as actual living people.

[–] [email protected] 39 points 1 year ago (19 children)

This is verging on a religious debate, but assuming that there's no "spiritual" component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex "meat computers"?

[–] [email protected] 38 points 1 year ago* (last edited 1 year ago) (23 children)

yeah this is knee-jerk anti-technology shite from people here because we live in a society organized along lines where creation of AI would lead to our oppression instead of our liberation. of course making a computer be sentient is possible, to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.

When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain tells Officer K (a replicant) that she's different from him because she has a soul, since sci-fi settings are pretty secular. Turns out this was prophetic, and people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.

[–] [email protected] 29 points 1 year ago (52 children)

I don't know where everyone is getting these in depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allows sentience to arise.

I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produce sentience.

That's not something that I've bought into yet.

[–] [email protected] 46 points 1 year ago* (last edited 1 year ago) (53 children)

so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here's my philosophical examination of the issue.

the thing is, we don't even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here's the task for people who want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh-so-smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore absurd on its face and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'.

consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

Consider information processing, and the kinds of information processing that our brains/minds are capable of.

What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having any such internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from.)

There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of this information in a conscious sense - that is, have an internal, subjective, mental experience of it - just as a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of its human operators).

and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

so the options we are left with in terms of conclusions to draw are:

  1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
  2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
  3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia ('soul'-ism as you might put it, but no 'soul' is required for this conclusion, it could just as easily be termed 'mystery-ism' or 'unknown-ism')

And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.

[–] [email protected] 69 points 1 year ago* (last edited 1 year ago) (3 children)

They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn't come here to read shit from AI.

[–] [email protected] 51 points 1 year ago (2 children)

I've seen this several times now; they're treating the word-generating parrot like fucking Shalmaneser in Stand on Zanzibar. You literally see redd*tors posting comments that are basically "I asked ChatGPT what it thought about it and here...".

Like it has remotely any value. It's pathetic.

[–] [email protected] 32 points 1 year ago (1 children)

They simply have to denigrate living human brains so their treat printers seem more elevated. More special. cringe

[–] [email protected] 35 points 1 year ago (9 children)

One of them also cited fucking Blade Runner.

“You’re mocking people who think AI is sentient, but here’s a made up story where it really is sentient! You’d look really stupid if you continued to deny the sentience of AI in this scenario I just made up. Stories cannot be anything but literal. Blade Runner is a literal prediction of the future.”

Wow, if things were different they would be different!

[–] [email protected] 28 points 1 year ago (11 children)

You are all superstitious barbarians, whereas I get my logical rational tech prophecies from my treats smuglord

[–] [email protected] 39 points 1 year ago (1 children)

They switched from worshiping Elon Musk to worshiping ChatGPT.

Some worship both now. Look at this euphoric computer toucher:

https://hexbear.net/comment/4293298

Bots already move packages, assemble machines, and update inventory.

ChatGPT could give you a summary of the entire production process. It can replace customer service agents, and support for shopping is coming soon.

Tesla revealed a robot with thumbs. They will absolutely try to replace workers with those bots, including workers at the factory that produces those bots.

Ignoring that because your gut tells you humans are special and always beat the machines in the movies just means you will be blindsided when Tesla fights unionizing workers with these bots. They'll use them to scab against the UAW's attempts to get in, and will be working hard to replace the humans at the bot factories with the same bots coming out.

[–] [email protected] 37 points 1 year ago (3 children)

ChatGPT could give you a summary of the entire production process

with entirely made up numbers

It can replace customer service agents

that will direct you to a non-existent department because some companies in the training data have one

and support for shopping is coming soon

i look forward to ordering socks and receiving ten AA batteries, three identical cheesegraters, and a leopard

[–] [email protected] 29 points 1 year ago (1 children)

They're in this thread, too. The very same "look at this hockey stick shaped curve of how awesome the treat printer is. The awesomeness will exponentially rise until the nerd rapture sweeps me away, you superstitious Luddite meat computers" euphoria.

[–] [email protected] 58 points 1 year ago (164 children)

I said it at the time when chatGPT came along, and I'll say it now and keep saying it until or unless the android army is built which executes me:

ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.

I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like ChatGPT. But, no, I reject that then, now, and forever. Perhaps people have some "real" argument for different types and stages of AI, and my only preemptive response to them is basically "keep your industry-specific terminology inside your specific industries." The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because... Frankly, they're full of shit and it's annoying.

[–] [email protected] 40 points 1 year ago (3 children)

The LLM marketing hype campaign has very successfully changed the overall perceived definition of what "AI" is and what "AI" could be.

Arguably it makes actual general AI as a concept harder to develop because financing and subsidies will likely keep going downstream toward LLM projects instead of attempts to emulate general intelligence.

[–] [email protected] 48 points 1 year ago (28 children)

I'm not really a computer guy but I understand the fundamentals of how they function and sentience just isn't really in the cards here.

[–] [email protected] 32 points 1 year ago (10 children)

I feel like only silicon valley techbros think they understand consciousness and do not realize how reductive and stupid they sound

[–] [email protected] 48 points 1 year ago (2 children)

Roko's Basilisk, but it's the snake from the Nokia dumb phone game.

[–] [email protected] 44 points 1 year ago (1 children)

I was gonna say, "Remember when scientists thought testing a nuclear bomb might start a chain reaction that ignites the whole atmosphere, and then did it anyway?" But then I looked it up, and I guess they actually did the calculations and figured out it wouldn't before they did the test.

[–] [email protected] 28 points 1 year ago (1 children)

Might have been better if it did pika-cousin-suffering

No I’m not serious I don’t need the eco-fascism primer thank you very much

[–] [email protected] 41 points 1 year ago (9 children)

I don't know if Reddit was always like this, but all /r/ subreddits feel extremely astroturfed. /r/liverpoolfc, for example, feels like it is run by the team's PR division. There are a handful of critical posts sprinkled in so redditors can continue to delude themselves into believing they are free-thinking individuals.

Also, this superintelligent thing was doing well on some fifth-grade-level tests according to Reuters' anonymous source, which got the OpenAI geniuses worried about an AI apocalypse.

[–] [email protected] 41 points 1 year ago (2 children)

The half-serious jokes about sentient AI made by dumb animals on reddit are no closer to the mark than an attempt to piss on the sun. AI can't be advancing at a pace greater than we think, unless we think it's not advancing at all. There is no god damn AI. It's a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time; it's got no grasp on anything, no comprehension, let alone a promise of sentience.
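The "stochastic calculation to print out the next word" part can be made concrete. Here's a minimal, hypothetical sketch in Python with made-up token scores; in a real LLM the scores (logits) come from a neural network conditioned on the whole prompt, but the final step really is just a weighted random draw like this:

```python
import math
import random

# Toy vocabulary and scores, made up purely for illustration.
# A real model emits one score per vocabulary entry at every step.
logits = {"the": 2.0, "a": 1.0, "sentient": -3.0, "cat": 0.5}

def sample_next_token(logits, temperature=1.0):
    """Softmax over the scores, then one weighted random draw."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens])[0]

# Each call is an independent weighted draw over the vocabulary;
# no persistent state, no comprehension, just the next word.
print(sample_next_token(logits))
```

Lower temperature sharpens the distribution toward the highest-scoring token; higher temperature flattens it toward uniform randomness. Either way, it's sampling, not thinking.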

There are plenty of things and people that get to me, but few are as good at it as idiot tech bros, with their delusions and their extremely warped perspective.

[–] [email protected] 40 points 1 year ago* (last edited 1 year ago) (3 children)

I think it should be noted that some of the members of the board of OpenAI are literally just techno-priests doing actual techno-evangelism: their jobs literally depend on this new god and the upcoming techno-rapture being perceived as at least a plausible item of faith. It probably works as well as any other marketing strategy, but all of this is in the context of Microsoft becoming the single largest company stakeholder in OpenAI; likely they don't want their money to go to waste paying a bunch of useless cultists, so they started yanking Sam Altman's chain.

The OpenAI board reacted to the possibility of Microsoft making budget calls and ousted Altman, and Microsoft swiftly reacted by formally hiring Altman and doubling down. Obviously most employees are going to side with Microsoft, since they're currently paying the bills. You're going to see people strongly encouraged to walk out of the OpenAI board in the upcoming weeks or months, and they'll go down screaming crap about the computer hypergod. You see, these aren't even marketing lines that they're repeating uncritically; it's what some dude desperately latching onto his useless six-figure job is screaming.

[–] [email protected] 38 points 1 year ago (12 children)

The saddest part of all is that it looks like they really are wishing for real life to imitate a futuristic sci-fi movie. They might not come out and say, "I really hope AI in the real world turns out to be just like in a sci-fi/horror movie" but that's what it seems like they're unconsciously wishing for. It's just like a lot of other media phenomena, such as real news reporting on zombie apocalypse preparedness or UFOs. They may phrase it as "expectation" but that's very adjacent to "hopeful."

[–] [email protected] 36 points 1 year ago (1 children)

achieved a breakthrough in mathematics

The bot put numbers in a statistically-likely sequence.

[–] [email protected] 31 points 1 year ago (2 children)

I swear 99% of reddit libs so-true don't understand anything about how LLMs work.

[–] [email protected] 33 points 1 year ago

New Q* drop lol

[–] [email protected] 30 points 1 year ago (1 children)

Some graph traversal algorithm ass name.
