this post was submitted on 28 Oct 2023
Yeah, I think a lot of people don't realize how much they've neutered ChatGPT. They don't want it to be vividly humanlike, just mostly so. The main corporate interest in chat AI is profit, i.e., eliminating the repetitive jobs that haven't been automated yet because they require someone with pseudo-humanlike behavior: help desks, customer service, knowledge management, certain types of assistants, legal aides, etc. That's their end game.
Remember, many of the companies diving into this field are very wary of their tech coming off as too realistic. Some of us may be excited at the prospect of AGI becoming a reality, but I'd bet the majority of society (and likely many governments) would instantly turn on such tech. Even though LLMs are far, far, faaaar away from achieving AGI, the fact that they already freak many folks out even with their current limitations proves my point.
Anyway, sorry for the rant. TL;DR: OpenAI intentionally keeps ChatGPT's conversational abilities relatively simple and efficient, since they're focused more on it seeming human enough to get the job(s) done.