this post was submitted on 19 Mar 2025
853 points (98.0% liked)

Technology

[–] [email protected] 1 points 4 days ago

I'm a software developer, and I know that AI is just the shiny new toy whose buzzword everyone uses to generate investment revenue.

99% of the crap people use it for is worthless. It's just a hammer, and everything is a nail.

It's just like "the cloud" was 10 years ago. Now everyone is back-pedaling from that because it didn't turn out to be the panacea that was promised.

[–] [email protected] 7 points 6 days ago

Misleading title. From the article,

> Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

In no way does this imply that the "industry is pouring billions into a dead end". AGI isn't even needed for industry applications, just implementing current-level agentic systems will be more than enough to have massive industrial impact.

[–] [email protected] 4 points 6 days ago (1 children)

LLMs are good for learning, brainstorming, and mundane writing tasks.

[–] [email protected] 2 points 6 days ago (1 children)

Yes, and maybe finding information that's right in front of them, and nothing more.

[–] [email protected] 2 points 6 days ago

Analyzing text from a different point of view than your own. I call that "synthetic second opinion"

[–] [email protected] 2 points 6 days ago

I went to CES this year and sat in on a few AI panels. This is actually not far off. Some panelists said this take is right, but multiple panels I went to said that this is a dead end, and that while useful, they are starting down different paths.

It's not bad; we're just finding it's not great.

[–] [email protected] 287 points 1 week ago (2 children)
[–] [email protected] 85 points 1 week ago* (last edited 1 week ago) (5 children)

I like my project manager, they find me work, ask how I'm doing and talk straight.

It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals while my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

[–] [email protected] 30 points 1 week ago* (last edited 1 week ago) (2 children)

COs are corporate politicians, media-trained to only say things which are completely unrevealing and lacking any substance.

This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you're having on a personal project or what toy to buy for your cat's birthday.

[–] [email protected] 113 points 1 week ago (5 children)

Optimizing AI performance by “scaling” is lazy and wasteful.

Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.

[–] [email protected] 27 points 1 week ago (1 children)

Thing is, same as with GHz, you have to do it as much as you can until the gains get too small. You do that, then you move on to the next optimization. That's what AI has done and is doing now: optimizing test-time compute, token quality, and other areas.

[–] [email protected] 92 points 1 week ago (36 children)

They're throwing billions upon billions into a technology with extremely limited use cases that is, at best, a novelty. My god, even drones fared better in the long run.

[–] [email protected] 78 points 1 week ago (2 children)

I mean it's pretty clear they're desperate to cut human workers out of the picture so they don't have to pay employees that need things like emotional support, food, and sleep.

They want a workslave that never demands better conditions, that's it. That's the play. Period.

[–] [email protected] 31 points 1 week ago* (last edited 1 week ago) (3 children)

If this is their way of making AI, brute-forcing the technology without innovation, AI will probably cost these companies more in infrastructure than just hiring people would. These AI companies are already not making much money relative to what they cost to maintain, and unless they charge companies millions of dollars just to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kind of pointless if they aren't willing to prioritize efficiency.

It's basically the same argument they have with people. They don't want to treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don't want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It's the never-ending cycle of cutting corners only to eventually make less money than you would have if you'd done things the right way.

[–] [email protected] 31 points 1 week ago* (last edited 1 week ago)

Absolutely. It's maddening that I've had to go from "maybe we should make society better somewhat" in my twenties to "if we're gonna do capitalism, can we do it how it actually works instead of doing it stupid?" in my forties.

[–] [email protected] 15 points 1 week ago (3 children)

And the tragedy of the whole situation is that they can't win, because if every worker is replaced by an algorithm or a robot, then who's going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can't expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire, and we start from scratch.

[–] [email protected] 78 points 1 week ago* (last edited 1 week ago) (3 children)

It's ironic how conservative the spending actually is.

Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

No.

Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
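For anyone wondering what the bitnet idea mentioned above actually changes: instead of full-precision weights, BitNet b1.58-style models constrain every weight to {-1, 0, +1} with a single absmean scale, which is why inference gets so cheap. A toy sketch of the quantization step (my own illustration, not the paper's code):

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize a weight tensor to {-1, 0, +1} with an absmean scale,
    roughly in the spirit of BitNet b1.58. Returns (quantized, scale)."""
    scale = np.abs(w).mean() + 1e-8          # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # round, then clamp to ternary
    return q.astype(np.int8), scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = ternary_quantize(w)
print(sorted(set(q.flatten().tolist())))  # only values from {-1, 0, 1}
```

Matrix multiplies against ternary weights reduce to additions and subtractions, which is where the claimed power savings come from.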

[–] [email protected] 1 points 6 days ago (1 children)

Good ideas are a dime a dozen. Implementation is the game.

Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.

[–] [email protected] 1 points 6 days ago

The corporate implementations are mostly crap though. With a few exceptions.

What’s needed is better “glue” in the middle. Larger entities integrating ideas from a bunch of standalone papers, out in the open, so they actually work together instead of mostly fading out of memory while the big implementations never even know they existed.

[–] [email protected] 72 points 1 week ago (10 children)

The actual survey result:

> Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled up. I think most people agree, including the investors pouring billions into this. They aren't betting it will turn into AGI; they're betting they have some application for current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we'd see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

[–] [email protected] 18 points 1 week ago (1 children)

It's becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what's been built is a glorified homework cheater and a better search engine.
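The diminishing-returns point above is what power-law scaling curves imply: each constant reduction in loss costs a multiplicatively larger dataset. A rough sketch with a Chinchilla-style data term, where every coefficient below is invented purely for illustration:

```python
def loss(tokens: float, irreducible: float = 1.7,
         b: float = 410.0, beta: float = 0.28) -> float:
    """Toy data-scaling curve of the form L(D) = E + B / D**beta.
    Coefficients are made up; only the shape matters here."""
    return irreducible + b / tokens ** beta

# Each 10x more data buys a smaller and smaller loss reduction.
for d in [1e9, 1e10, 1e11, 1e12]:
    print(f"{d:.0e} tokens -> loss {loss(d):.3f}")
```

The curve flattens toward the irreducible term, which is the "exponentially more data for the next bit of error correction" problem in a nutshell.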

[–] [email protected] 33 points 1 week ago

what's been built is a glorified homework cheater and an ~~better~~ unreliable search engine.

[–] [email protected] 59 points 1 week ago* (last edited 1 week ago) (3 children)

Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they're claimed to be. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding five cameras to a phone or touchscreens to cars: things that make something seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything but the price.

[–] [email protected] 42 points 1 week ago (17 children)

My 5,000 closest friends and I don't like that the website and its 1,300 partners all need my data.

[–] [email protected] 33 points 1 week ago (9 children)

I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).

[–] [email protected] 32 points 1 week ago* (last edited 1 week ago) (3 children)

I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.

The fact that nothing got optimized, and it still didn't collapse after DeepSeek, kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.

I have been called a killjoy luddite by reddit-brained morons almost every time.

[–] [email protected] 1 points 6 days ago (1 children)

Why didn't you drop the quotes from Turing, Minsky, and Lovelace?

[–] [email protected] -1 points 6 days ago

Because finding the specific stuff they said (which in Lovelace's case was very broad/vague, and in Turing's and Minsky's cases far too technical for anyone with Sam Altman's dick in their mouth to understand) sounds like actual work. If you're genuinely curious, you can look up what they had to say. If you're just here to argue for this shit, you're not worth the effort.

[–] [email protected] 16 points 1 week ago

There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.

[–] [email protected] 15 points 1 week ago* (last edited 1 week ago) (2 children)

The problem is that these companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they have governments in their pockets. Governments are dependent on cloud/Microsoft software; literally every country on this planet is, except maybe China, North Korea, and Russia. They can raise prices tenfold over the next 10 years, not give a fuck, spend a trillion on AI while saying "we're nearly there" over and over, and literally nobody can stop them right now.
