this post was submitted on 11 Apr 2024
1314 points (95.8% liked)

[–] [email protected] 6 points 7 months ago* (last edited 7 months ago) (3 children)

LLMs are not a step to AGI. Full stop. Lovelace called this like 200 years ago. Turing and Minsky called it in the '40s.

[–] [email protected] 1 points 7 months ago (1 children)

We may not even "need" AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.

LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).

We already use them with other models. For example, Whisper is a model that recognizes speech; you feed its output to an LLM to interpret it, use the LLM's JSON output with a traditional parser to drive a motion control system, then go back to an LLM to generate text for one of the many TTS models, so it can "tell you what it's going to do".
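A minimal sketch of that kind of chaining, just to make the flow concrete. Every function here is a made-up stub for illustration (`transcribe`, `llm`, `move_robot`, `speak` are not real APIs); in an actual setup they'd wrap Whisper, a local LLM, your motion controller, and a TTS model:

```python
import json

# Stand-in stubs, purely illustrative; in the pipeline described above they'd
# wrap Whisper (speech-to-text), a local LLM, a motion controller, and a TTS model.
def transcribe(audio_path: str) -> str:
    return "turn up the furnace by two degrees"

def llm(prompt: str) -> str:
    # A real LLM call would go here; this stub returns canned output.
    if "JSON" in prompt:
        return '{"action": "set_furnace", "delta": 2}'
    return "Okay, raising the furnace setpoint by two degrees."

def move_robot(command: dict) -> None:
    print(f"[motion control] executing {command}")

def speak(text: str) -> None:
    print(f"[tts] {text}")

def handle_request(audio_path: str) -> None:
    text = transcribe(audio_path)                          # speech -> text
    raw = llm(f"Return JSON for this request: {text}")     # text -> structured intent
    command = json.loads(raw)                              # plain old parser, no ML
    move_robot(command)                                    # drive the actuator
    speak(llm(f"Summarise what you just did: {command}"))  # describe it back, then TTS

handle_request("clip.wav")
```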

Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it's just 4 different machine learning algorithms in a trenchcoat.

[–] [email protected] 2 points 7 months ago (1 children)

passable imitation of understanding

Okay so there are things they're useful for, but this one in particular is fucking... Not even nonsense.

Also, chaining ML algorithms makes the necessary clock cycles grow exponentially with each one you add.

So it's less a trenchcoat and more an entire data center.

And it still can't understand; it's still just sleight of hand.

[–] [email protected] -1 points 7 months ago (1 children)

And it still can't understand; it's still just sleight of hand.

Yes, thus "passable imitation of understanding".

The average consumer doesn't understand tensors, weights and backprop. They haven't even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.

Passable imitation.

You don't need a data center except for training, either. There's no exponential term, since the models are executed sequentially. You can even flush the huge LLM off your GPU when you don't actively need it.

I've already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.

You yell out "$wakeword, it's cold in here. Turn up the furnace" and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
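Roughly how that "flush it off the GPU between stages" idea looks, as a sketch: the `torch` and `gc` calls are real, but `StubLLM` and `load_local_llm` are made-up placeholders for whatever model you actually run.

```python
import gc
import torch

class StubLLM:
    """Placeholder for whatever quantized local model actually gets loaded."""
    def generate(self, prompt: str) -> str:
        return f"(reply to: {prompt})"

def load_local_llm() -> StubLLM:
    # In a real setup this would load the model's weights onto the GPU.
    return StubLLM()

def answer_once(prompt: str) -> str:
    """Load the big model, use it, then free VRAM before the next stage runs."""
    model = load_local_llm()
    reply = model.generate(prompt)
    del model                     # drop the only reference to the model
    gc.collect()                  # let Python reclaim the host-side objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand cached VRAM back (same call on ROCm builds)
    return reply

print(answer_once("It's cold in here, turn up the furnace"))
```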

[–] [email protected] 4 points 7 months ago (2 children)

One of the engineers who wrote ELIZA had, like, a deep connection to and relationship with it. The very person who wrote it.

Painting a face on a revolving door will make people form a relationship with it. That's not a measure of AGI.

gives them an answer

'An answer' isn't hard. A Magic 8-Ball does that. So does a piece of paper that says "drink water, you stupid cunt." This makes me think you're arguing from commitment or identity rather than knowledge or reason. Or you just don't care about truth.

Yeah, they talk to it like an AGI. Or a search engine (which is a step to AGI, now largely crippled by LLMs).

Color me skeptical of your claims in light of this.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (1 children)

I think it's pretty natural for people to confuse the way mechanisms of communication are used with inherent characteristics of the entity they're communicating with: "If it talks like a medical doctor then surely it's a medical doctor."

Only that's not how it works, as countless politicians, salesmen, and conmen have demonstrated. No matter how much we dig down into subtle details, communication isn't really guaranteed to tell us all that much about the characteristics of whatever is on the other side. They might just be lying or simulating, and there are even entire societies and social strata educated since childhood to "always present a certain kind of image" (just go read about old wealth in England), in other words to project a fake impression of their character in the way they communicate.

All this to say that it doesn't require ill intent for somebody to go around insisting that LLMs are intelligent. Many if not most people try to read the character of a subject from the language the subject uses (which they shouldn't, but that's how humans evolved to think in social settings), so they truly believe that something which produces language like an intelligent creature must be an intelligent creature.

They're probably not the right people to be opining on cognition and intelligence, but let's not assign malice to it; at worst it's pigheaded ignorance.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

I think the person my previous comment was replying to wasn't malicious; I think they're really invested, financially or emotionally, in this bullshit, to the point that their critical thinking is compromised. Different thing.

Odd loop backs there.

[–] [email protected] 0 points 7 months ago (1 children)

I think you're misreading the point I'm trying to make. I'm not arguing that LLM is AGI or that it can understand anything.

I'm just questioning what the true use case of AGI would be that can't be achieved by existing expert systems, real humans, or a combination of both.

Sure, DeepSeek or Copilot won't answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.

However, when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn't that good enough? Do we really need to develop a true human-level intelligence when we already have 8 billion of those looking for something to do?

AGI is a fun theoretical concept, but I really don't see the practical need for a "next step" past the point of expanding and refining our current deep learning models, or how it would improve our world.

[–] [email protected] 1 points 7 months ago

Those are not meaningful use cases for LLMs.

And they're getting worse at even faking it now.

[–] [email protected] 0 points 7 months ago (2 children)

Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn't - therefore the path there is still unknown.

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago)

Okay, this is no more a step to AGI than the publication of 'Blindsight' or me adding tamarind paste to sweeten my tea.

The project isn't finished, but we know basic stuff. And yeah, sometimes history is weird; sometimes the Enlightenment happens because of oblivious assholes having bad opinions about butter and some dude named 'le rat' humiliating some assholes in debates.

But LLMs are not a step to AGI. They're just not. They do nothing intelligence does that we couldn't already do. You're doing pareidolia. Projecting shit.

[–] [email protected] -1 points 7 months ago

When the Jewish people made their first mud golem, ages ago?

[–] [email protected] -2 points 7 months ago (1 children)

To create general AI, we first need a way for computers to communicate proficiently with humans.

LLMs are just that.

[–] [email protected] 5 points 7 months ago (1 children)

It's not, though. It's autocorrect. It is not communication. It's literally autocorrect.

[–] [email protected] -3 points 7 months ago (1 children)

That is not an argument. Let me demonstrate:

Humans can't communicate. They are meat. They are not communicating. It's literally meat.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (1 children)

Spanish is not English. It's Spanish.

A lot of people are really emotionally invested in this tool being a lot of things it's not. I think that's because it's kind of the last gasp of pretending capitalism can give us something that isn't shit, the last thing that came out before the end-stage enshittification spiral tightened (never mind the fact that it's largely a cause of that), and I don't think any of you can be critical or clear-headed here.

I'm afraid we're so obsessed with it being the bullshit sci-fi toy it isn't that we'll ignore its real use cases, or worse: apply it to its real use cases, completely misunderstand what it's doing, and Adeptus Mechanicus our way into getting so fucking many people killed or maimed. Those uses are mostly medicine-adjacent.

[–] [email protected] 0 points 7 months ago (1 children)

I was just pointing out that your emotional plea that this technology is "just autocorrect" is not an argument in any way.

For it to be one, you need to explicitly state the implication of that fact. Yes, architecturally it is autocomplete, but that does not obviously imply anything. What is it about autocomplete that bars a system from the ability to understand?

Humans are made of meat but that does not imply they can't speak or think.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

If I said 'this is just a spoon' you'd know what I meant. This is not an emotional appeal.

I'm not saying computers can't ever think. I'm saying this is just autocorrect, a fancy version of the shit I'm using to type this.

Autocorrect is not understanding, and if you don't understand that, you have zero understanding of either tech or philosophy. This topic is about both, so you really shouldn't be making assertions. Stick to genuine questions.