this post was submitted on 12 Sep 2023
151 points (100.0% liked)

Technology

you are viewing a single comment's thread
[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

What else should they be?? They reflect human language.

[–] [email protected] 8 points 1 year ago (2 children)

People think they are actually intelligent and perform reasoning. This article discusses how and why that is not true.

[–] [email protected] 2 points 1 year ago (1 children)

People think they are actually intelligent and perform reasoning.

They do both. The article fails to successfully argue that point and just turns the AI's failure to answer an irrelevant trivia question into a gotcha moment.

[–] [email protected] 3 points 1 year ago (1 children)

I would encourage you to ask ChatGPT itself if it is intelligent or performs reasoning.

[–] [email protected] 3 points 1 year ago (1 children)

ChatGPT: I can perform certain types of reasoning and exhibit intelligent behavior to some extent, but it's important to clarify the limitations of my capabilities. [...] In summary, while I can perform certain forms of reasoning and exhibit intelligent behavior within the constraints of my training data, I do not possess general intelligence or the ability to think independently and creatively. My responses are based on patterns in the data I was trained on, and I cannot provide novel insights or adapt to new, unanticipated situations.

That said, this is one area where I wouldn't trust ChatGPT one bit. It has no introspection (outside of the prompt), since it has no long-term memory. So everything it says is based on whatever marketing material OpenAI trained it with.

Either way, any reasonable conversation with the bot will show that it can reason and is intelligent. The fact that it gets stuff wrong sometimes is absolutely irrelevant, since every human does that too.

[–] [email protected] 2 points 1 year ago (1 children)

I think it's hilarious you aren't listening to anyone telling you you're wrong, even the bot itself. Must be nice to be so confident.

[–] [email protected] 3 points 1 year ago

You've got to provide actual arguments, examples, failure cases, etc. Instead, all I see is repetition of the same tired talking points from 9 months ago when the thing launched. It's boring and makes me seriously doubt whether humans are capable of original thought.

[–] [email protected] 1 points 1 year ago

I think their creators have deliberately disconnected the runtime AI model from re-reading its own training material, because it's a copyright and licensing nightmare.