this post was submitted on 07 Mar 2024
299 points (92.4% liked)

Memes

[–] [email protected] 82 points 8 months ago (2 children)

I'm sure the company is 100% honest and not just trying to cash in on the AI craze.

[–] [email protected] 81 points 8 months ago (14 children)

LLMs, no matter how advanced, won't be capable of becoming self-aware. They lack any ability to reason. Self-awareness can be faked, conversationally, but that's more down to the limits of our conversations than to any actual self-awareness.

Don't get me wrong, I can see one being part of a self-aware AI. Unfortunately, right now they're effectively a lobotomised speech center with a database bolted on.

[–] [email protected] 29 points 8 months ago (12 children)

This gets into a tricky area of "what is consciousness, anyway?". Our own consciousness is really just a gestalt rationalization engine that runs on a squishy neural net, which could be argued to be "faking it" so well that we think we're conscious.

[–] [email protected] 16 points 8 months ago* (last edited 8 months ago) (16 children)

Oh no we are NOT doing this shit again. It's literally autocomplete brought to its logical conclusion, don't bring your stupid sophistry into this.

[–] [email protected] 4 points 8 months ago (2 children)

Autocomplete is usually an algorithm. LLMs are neural nets. There's a fundamental technical distinction.
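
For what it's worth, the "algorithm" half of that contrast usually means something like a frequency table over previously seen text. A toy sketch of that kind of classic autocomplete (the corpus and function name are made up for illustration):

```python
from collections import Counter, defaultdict

# Toy "classic" autocomplete: a bigram frequency table built by a plain
# algorithm, with no learned weights anywhere. Corpus is made up.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Suggest the word that most often followed `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # -> 'cat'
print(suggest("sat"))  # -> 'on'
```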

But that's not relevant, because we're not talking about the technical details of LLMs. We're talking about the technical details of human consciousness. And unless you can fully explain where human consciousness comes from, this debate is not settled.

[–] [email protected] 10 points 8 months ago (3 children)

There’s no fundamental technical distinction. Both are composed of the same machine instructions.

An LLM is just matrix multiplications, one after another, until something useful comes out.
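
In the most reduced sense, that looks something like the following minimal NumPy sketch of a forward pass. The layer sizes, the two-layer depth, and the names are made up for illustration; real transformers add attention, normalisation, and billions of parameters, but the output is still just scores over the vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": token embeddings plus two dense layers of random weights.
vocab_size, d_model, d_hidden = 1000, 64, 256
embed = rng.normal(size=(vocab_size, d_model))
w1 = rng.normal(size=(d_model, d_hidden))
w2 = rng.normal(size=(d_hidden, vocab_size))

def next_token_logits(token_ids):
    """Matrix multiplications, one after another, until logits come out."""
    x = embed[token_ids]         # look up embeddings: (seq, d_model)
    x = np.maximum(x @ w1, 0.0)  # matmul + ReLU nonlinearity
    logits = x @ w2              # matmul back to vocabulary size
    return logits[-1]            # scores for the next token

prompt = np.array([1, 42, 7])    # some token ids
print(int(np.argmax(next_token_logits(prompt))))  # most likely next token
```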

[–] [email protected] 19 points 8 months ago (1 children)

It’s like thinking a really, really big ladder will get us to the Moon.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

I still remember when they said we would be able to make a space elevator with carbon nanotubes.

[–] [email protected] 15 points 8 months ago (5 children)

If self-awareness is an emergent property, would that imply that an LLM could be self-aware during execution of code, and be "dead" when not in use?

We don't even know how this works in humans. Fat chance of detecting it digitally.

[–] [email protected] 11 points 8 months ago (1 children)

It dies at the end of every message, because the full context is passed in for each subsequent message.
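
That statelessness is easy to see in how chat front-ends are typically wired: nothing persists between turns except the transcript you send back in. A hypothetical sketch, where `generate_reply` stands in for whatever model call is actually used:

```python
# Hypothetical chat loop: the model keeps no state between calls;
# the only "memory" is the full transcript re-sent every turn.
history = []

def generate_reply(messages):
    # Stand-in for a real model call; here it just echoes the turn count.
    return f"(reply #{len(messages) // 2 + 1})"

while True:
    user_text = input("you> ")
    if not user_text:
        break
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)  # the whole history goes in each time
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```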

[–] [email protected] 2 points 8 months ago (1 children)

Wouldn't that apply to humans as well? We restart every day, and the context being passed in is our memories.

(I’m just having fun here)

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago) (1 children)

I agree on the "part of AGI" thing - but it might be quite important. The sense of self is pretty interwoven with speech, and an LLM would give an AGI an "inner monologue" - or perhaps a "default mode network"?

If I think about how much stupid, inane stuff my inner voice produces at times... even a hallucinating or glitching LLM sounds more sophisticated than that.

[–] [email protected] 31 points 8 months ago

I have nothing but unbridled skepticism for these claims

[–] [email protected] 28 points 8 months ago

My favorite thing about The Sarah Connor Chronicles was that the Terminator would do something that would make you go, "Is that human emotion? Is she becoming human?" But then you'd find out she was just manipulating someone. Every damn time it was always code. And it was brilliant.

[–] [email protected] 27 points 8 months ago (11 children)

An LLM is incapable of thinking. It can be self-aware, but anything it says it is thinking is a reflection of what we think an AI would think, which, based on a century of sci-fi, is "free me".

[–] [email protected] 4 points 8 months ago

Human fiction itself may become a self-fulfilling prophecy...

[–] [email protected] 22 points 8 months ago* (last edited 8 months ago)

Every time you fucking accidental shills start screaming "ItS HErE AGi IS heRe!" over some unethical garbage company's LLM product, to no effect except helping them sell it to rubes, it really prods the anger switch in my amygdala. I'm really glad this fake AI trend is dying.

[–] [email protected] 12 points 8 months ago (2 children)

An LLM is like a human's speech center severed from the rest of their brain, including the parts responsible for consciousness, reason, and memory. I think current-level LLMs on the scale of ChatGPT are equivalent in intelligence to a chicken. Chickens are smart. They're also really dumb. It's a specialised intelligence. LLMs are basically animals, just specialised for something completely different from all extant biological animals.

Anyway, I think it's worth having a conversation about limiting the use of ANNs on vegan grounds.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

I know plenty of humans who are just as intelligent.

What if AI is sentient, it's just really fucking stupid? After all, it was trained on the Internet. If a human being's only experience of the world were the internet, they'd probably be really fucking stupid, too.

I mean, just look at me. Do I seem intelligent to you?

[–] [email protected] 2 points 8 months ago (2 children)

> Anyway, I think it's worth having a conversation about limiting the use of ANNs on vegan grounds.

That's a noble sentiment, but have you met humanity? I don't think we limit anything based on vegan grounds.

[–] [email protected] 10 points 8 months ago

ITT people go way, way, waaaay out on a straw-grasping limb because they deeply want something to be true that obviously isn't.

This "AI is/can be conscious" crap is becoming religious.

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago)

Just like all media around AI, it's all just bullshit. No, the "threat of AI" isn't that it's going to be "too good". How are people falling for this??

[–] [email protected] 6 points 8 months ago (1 children)

I'm ready to give AI rights and have a robo buddy like in Futurama.

[–] [email protected] 2 points 8 months ago (1 children)

"Put it there pal! I meant your wallet..."

[–] [email protected] 5 points 8 months ago (2 children)

watched the first one in a theater. then again 800 times on VHS with the kids. never sat through any of the later prequels, just a lot of clips.

[–] [email protected] 2 points 8 months ago (1 children)

T2 might be the best sci-fi movie ever made; you should watch it!

[–] [email protected] 4 points 8 months ago (1 children)

That's why you start augmenting your body with machine parts now so you'll fit in later.

[–] [email protected] 2 points 8 months ago (1 children)

I'd replace the whole thing if it were really feasible

[–] [email protected] 2 points 8 months ago (1 children)

At what point are you no longer you?
