this post was submitted on 29 Jul 2023
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Perhaps I should rephrase the argument as Searle did. He didn't actually discuss "abstract understanding", instead he made a distinction between "syntax" and "semantics". And he claimed that computers as we know them cannot have semantics, whereas humans can (even if we don't all have the same semantics).

Now consider a quadratic equation. If you want to solve it, you can plug the coefficients into the quadratic formula. There are other ways to solve it, but the formula will always give you the right answer.
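To make the "plug the coefficients in" step concrete, here's a minimal sketch in Python (function name and real-roots-only restriction are mine, just for illustration):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula:
    x = (-b ± sqrt(b^2 - 4ac)) / (2a)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 3x + 2 = 0 factors as (x - 1)(x - 2), so the roots are 2 and 1:
solve_quadratic(1, -3, 2)  # → (2.0, 1.0)
```

Note that the function itself is exactly the point being made: it shuffles the symbols correctly without "knowing" anything about what the roots mean.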

If you remember your algebra class, you will recognize that the quadratic formula isn't just some random equation to compute. You use it with intention, because the answer is semantically meaningful. It describes things like cars accelerating or apples falling.

You can teach a three-year-old to identify the coefficients, and you can show them the symbols that make up the quadratic formula: "-", second number, "+", "√", "(", etc. And you can teach them to copy those symbols into a calculator in order. So a three-year-old could probably solve a quadratic equation. But they almost certainly have no idea why they are doing what they are doing. It's just a series of symbols that they were told to copy into a calculator; their only intention was to copy them in order correctly. There are no semantics behind the equation.

For that matter, a three-year-old could equally well enter the symbols necessary to calculate relativistic time dilation, which is an even shorter equation. But if their parents proudly told you that their toddler can solve problems in special relativity, you might think, "Yes... but not really."
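The formula the toddler would be keying in is t′ = t / √(1 − v²/c²). As a sketch (function name is mine):

```python
import math

def time_dilation(t, v, c=299_792_458.0):
    """Elapsed time t' measured for a clock moving at speed v:
    t' = t / sqrt(1 - v^2 / c^2)."""
    return t / math.sqrt(1 - (v / c) ** 2)

# At 60% of light speed the Lorentz factor is 1/sqrt(1 - 0.36) = 1.25:
time_dilation(1.0, 0.6 * 299_792_458.0)  # → 1.25
```

It really is fewer keystrokes than the quadratic formula, which is what makes the "toddler does relativity" framing so obviously hollow.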

That three-year-old is every computer program. Sure, an AI can enter symbols into a calculator and report the answer. If you tell it to enter a different series of symbols, it will report a different answer. You can tell the AI that one answer scores 0.1 and another scores 0.8, and to calculate a different equation that is based partly on those scores. But to the AI, those scores and equations have no semantic meaning. At some point those scores might stop increasing, and you will declare that the AI is "trained". But at no point does the AI assign any semantic content to those symbols or scores. It is pure syntax.
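To put the "scores stop increasing, declare it trained" loop in code: here's a toy hill-climbing sketch (entirely hypothetical, not any real ML framework) where the program nudges a number in whichever direction raises a score, with no notion of what the score is *about*:

```python
def train(score_fn, w=0.0, lr=0.1, steps=100):
    """Adjust w to increase score_fn(w) by trial and error.
    The loop manipulates numbers; it attaches no meaning to them."""
    for _ in range(steps):
        if score_fn(w + lr) > score_fn(w):
            w += lr
        elif score_fn(w - lr) > score_fn(w):
            w -= lr
        else:
            break  # score stopped increasing: declare the model "trained"
    return w

# The score happens to peak at w = 0.8, but the loop never "knows" that:
best = train(lambda w: -(w - 0.8) ** 2)
```

Whether the score measures cat-photo accuracy or nothing at all, the loop runs identically, which is Searle's syntax-without-semantics point in miniature.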