I read an article once about how, when humans hear that someone has died, the first thing they try to do is come up with a reason that whatever befell the deceased couldn't happen to them. Sometimes there's a logical reason, sometimes there isn't, but either way the person latches onto that reason to believe they're safe. I think we're seeing the same thing here with AI. People are watching a small percentage of people lose their jobs to a technology that 95% of the world or more didn't believe was possible a couple of years ago, and they're searching for reasons to believe they'll be fine, then latching onto them.
I worked at a newspaper when the internet was growing, and I saw the same thing across the entire organization. So much of the staff believed the internet was a fad. That belief did not work out for them: they were a giant, and they were gone within 10 years. I'm not saying we aren't in an AI bubble now, but there is several orders of magnitude more money in the internet today than there was during the dot-com bubble; just because it's a bubble doesn't mean it won't eventually consume everything.
The thing is, after enough digging you understand that LLMs are nowhere near as smart or as advanced as most people make them out to be. Sure, they can be super useful, and sure, they're good enough to replace a bunch of human jobs, but rather than being the AI "once thought impossible" they're just digital parrots that do a credible impersonation of it. The real AI, now renamed AGI, is still very far off.
The idea and name of AGI are not new, and "AI" has not been used to mean AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue we are back in those times, though, since despite learning so much over the years we still have no idea how hard AGI is going to be. As of right now, the only correct answer to "how far away is AGI?" is "I don't know."
I am not sure they have to reach AGI to replace almost everyone. The amount of investment in them is higher than it has ever been. Things are, and honestly have been, moving quickly. No, they are not as advanced as some people make them out to be, but I also don't think the next steps are as nebulously difficult as some want to believe. But I would love it if you saved this comment and came back in 5 years to laugh at me; I would probably be pretty relieved as well.