this post was submitted on 05 Jun 2024
95 points (100.0% liked)
Technology
I don't think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.
I think the more likely scenario is also more grim:
AI actually does continue to advance, getting better and better and displacing more and more jobs. It doesn't happen all at once, so barely anything gets done about it. Some half-assed regulations are attempted, but they predictably end up doing nothing, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would have. Corporations grow in power, build their own autonomous armies, and pressure governments to leave them unregulated. Eventually all resources are managed by and for a few rich assholes, while the rest of the world tries to survive without angering them.
If we're unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever is the equivalent) is the score.
Limitations of the human body act as an important balancing factor keeping democracies from collapsing. No human can rule a nation alone; they need armies and workers. Intellectual work is especially important (unless you have some other source of income and can outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals, and housing ceases to matter to the rulers; they can hand it to the army as a reward and make the rest of the population do manual work. And once manual work and policing through force are automated too, there is no need for even those slivers of decency.
Once a single human can rule a nation, there are enough rich psychopaths that one of them will attempt it.
There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of autonomous small drones with explosive charges using face recognition to target entire ideologies by tracking social media), misaligned AGI going rogue (e.g. the famous paperclip maximizer, although probably not exactly this scenario), collapse of the internet due to propaganda bots using next-gen generative AI... I'm sure there's more.
AI doesn't get better. It's completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can't keep throwing electricity at a problem.

It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It's an excuse to lay off a bunch of workers after COVID who were going to get laid off anyway. Managers were like, sweet, I'll trim some excess employees and replace them with AI! Wrong. It's a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary.

The technological ignorance you are responding to? That's you. You don't know how the economy works and you don't know how AI works, so you're just believing all this Roko's basilisk nonsense out of an overactive imagination. It's not an insult; lots of people are falling for it. AI companies are straight up lying, and the media is stretching the truth to the point of breaking. But I'm telling you: don't be a sucker. Until there's a breakthrough that fixes the resource-consumption issue by orders of magnitude, I wouldn't worry too much about Ellison's AM becoming a reality.
I find it rather disingenuous to summarize the previous poster's comment as a "Roko's basilisk" scenario; that intentionally picks a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about the actual threats (some more plausible than others, IMO).
I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.
I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is to me equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.
I wasn't debating you. I have debates all day with people who actually know what they're talking about; I don't come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I'm not saying the things you're describing are technically impossible, I'm saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn't work. It's not a debate, it's science. And not the fake kind run by corporate interests, the real thing based on math.
There's gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can't stop producing electricity to run these scam machines because someone might lose money.
Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4's output is hard to distinguish from a genuine human's. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making the models bigger. But it keeps getting better, and there's no cutoff in sight.
That you can straight-up comment "AI doesn't get better" at a tech literate sub and not be called out is honestly staggering.
I actually don't think it is because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old "big tech is only about hype, techbros are all charlatans from the capitalist elite" lines for karma/retweets/likes without ever actually taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political world view, and in doing so are sitting ducks when it comes to the very real threats it poses.
The difference between GPT-3 and GPT-4 is mostly the number of parameters, i.e. processing power. I don't know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don't know what algorithmic improvements are going to net efficiencies in the "orders of magnitude" that would be necessary to see noticeable improvement in the technology. The difference between 3 and 4 is hundreds of billions of parameters versus, reportedly, over a trillion. Is a GPT-5 going to have tens of trillions of parameters? No.
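The "orders of magnitude" point can be made concrete with the common back-of-the-envelope estimate that training a transformer costs roughly 6 × N × D floating-point operations, for N parameters trained on D tokens. A rough sketch: the GPT-3 figures (175B parameters, ~300B tokens) are from its paper, while the scaled-up model is purely a hypothetical for illustration.

```python
# Back-of-the-envelope training cost: ~6 * N * D FLOPs for a transformer
# with N parameters trained on D tokens (a widely used approximation).
# GPT-3's figures (175B params, ~300B tokens) come from its paper; the
# "10x scale-up" model below is hypothetical, not any real system.

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a transformer."""
    return 6 * n_params * n_tokens

gpt3 = train_flops(175e9, 300e9)      # ~3.15e23 FLOPs
bigger = train_flops(1.75e12, 3e12)   # 10x the params AND 10x the tokens

print(f"GPT-3-scale:  {gpt3:.2e} FLOPs")
print(f"10x scale-up: {bigger:.2e} FLOPs ({bigger / gpt3:.0f}x the compute)")
```

Because cost is a product of model size and data size, scaling both by 10x multiplies training compute by 100x, which is why each successive order-of-magnitude jump gets so much harder to pay for.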
Tech-literate people are apparently just as susceptible to this grift, maybe more susceptible, from what little I understand about behavioral economics. You can poke holes in my argument all you want; this isn't a research paper.