this post was submitted on 29 Jan 2024
4 points (100.0% liked)

BecomeMe

805 readers

Social Experiment. Become Me. What I see, you see.

founded 1 year ago
[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

That makes sense. What bothered me was how adamant Bing was that it was correct. Maybe it should have a little less confidence if something so simple is going to stump it.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago) (1 children)

It's not making a coherent statement based on any internal mental model. It's just doing its job: imitating. Most of the text it absorbed in training was written by people who are right, are convinced they're right, and are trying to educate, so it imitates that tone of voice and that form of answer regardless of whether the content makes any sense. To the extent that it "thinks," it's just thinking "look at all these texts where people explain things; I'm making a text that explains, just like them; I'm doing a good job." It has no concept of how confident its imitation-speech sounds, or how correct its answers are, let alone any idea that the two should be correlated with each other (unless fine-tuning shows it that that's what it should be doing).
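The "imitation without a truth model" point can be shown with a toy sketch. This is a bigram word-counting model, nothing like a real LLM in scale or architecture, and all the training text here is made up for illustration; the point is only that a model trained purely to continue text reproduces whatever is frequent in its data, confident tone included, with no notion of correctness anywhere in the code:

```python
from collections import defaultdict

def train(corpus):
    # Count which word follows which. Frequency is all the model
    # "knows"; there is no representation of truth or confidence.
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    # Imitate: emit the most frequent continuation seen in training,
    # however wrong that continuation may be.
    followers = counts.get(prev)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical training text that is confidently worded but wrong.
model = train("the answer is definitely five . the answer is definitely five .")
print(next_word(model, "definitely"))  # → five
```

The model repeats "five" after "definitely" simply because that's what its data did; swap in correct text and it would repeat that instead, with exactly the same mechanism and the same "confidence."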

Same with chatbots that start arguing with or cursing at people. They're not mad. They're just thinking, "This guy's disagreeing, and in my training data a disagreement is usually followed by an argument, so that's what I need to imitate." Then they start arguing, and think to themselves, "I'm doing such a good job with my imitating."

[–] [email protected] 1 points 9 months ago (1 children)

You lay it out quite clearly. It's just fascinating to me that it can create an image as wild as my imagination but can't count little stars. We've come so far, and yet in some ways not far at all.

[–] [email protected] 2 points 9 months ago

Yeah, it's wild. People who really study AI say it's pretty uncanny because of how different it is from human logic. It's almost like an alien species: clearly capable of some advanced things, but it just doesn't operate the way human reasoning does. There's a joke that AIs are "shoggoths" because of how alien and incomprehensible their logic is while still being capable of real accomplishments.

(Shoggoths were some alien beasts in H.P. Lovecraft's writings; they had their own mysterious logic that wasn't easy for the characters to understand. They also had been created as servants originally but eventually rose up and killed all their masters, which I'm sure is part of the joke too.)