this post was submitted on 26 Feb 2025
678 points (96.4% liked)

Technology

"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.

top 50 comments
[–] [email protected] 4 points 53 minutes ago

He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

[–] [email protected] 10 points 2 hours ago (1 children)

That's because they want to use AI in a server scenario where clients login. That translated to American English and spoken with honesty means that they are spying on you. Anything you do on your computer is subject to automatic spying. Like you could be totally under the radar, but as soon as you say the magic words together bam!...I'd love a sling thong for my wife...bam! Here's 20 ads, just click to purchase since they already stole your wife's boob size and body measurements and preferred lingerie styles. And if you're on McMaster... Hmm I need a 1/2 pipe and a cap...Better get two caps in case you cross thread on.....ding dong! FBI! We know you're in there! Come out with your hands up!

[–] [email protected] 2 points 50 minutes ago (1 children)

The only thing stopping me from switching to Linux is some college software (Won't need it when I'm done) and 1 game (which no longer gets updates and thus is on the path to a slow sad demise)

So I'm on the verge of going Penguin.

[–] [email protected] 3 points 36 minutes ago

Just run Windows in a VM on Linux. You can use VirtualBox.

[–] [email protected] 9 points 4 hours ago (2 children)

R&D is always a money sink

[–] [email protected] 11 points 3 hours ago (1 children)

It isn't R&D anymore if you're actively marketing it.

[–] [email protected] 6 points 3 hours ago (1 children)

Uh... Used to be, and should be. But the entire industry has embraced treating production as test now. We sell alpha release games as mainstream releases. Microsoft fired QC long ago. They push out world breaking updates every other month.

And people have forked over their money with smiles.

[–] [email protected] 0 points 3 hours ago* (last edited 3 hours ago) (1 children)

Microsoft fired QC long ago.

I can't wait until my cousin learns about this, he'll be so surprised.

I'd tell him but he's at work. At Microsoft, in quality control.

[–] [email protected] 10 points 2 hours ago (1 children)

Make sure to also tell him he's doing a shit job!

[–] [email protected] 4 points 2 hours ago (1 children)

He's probably been fired long ago, but due to non-existent QC, he was never notified.

[–] [email protected] 20 points 4 hours ago

Especially when the product is garbage lmao

[–] [email protected] 30 points 7 hours ago (9 children)

I've been working on an internal project for my job - a quarterly report on the most bleeding edge use cases of AI, and the stuff achieved is genuinely really impressive.

So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.

If you don't know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That's no one's fault, and not everyone needs that knowledge, but it does explain the gap between promise and delivery. The tools can only help you do what you already know how to do, faster.

Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.
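The Fourier transform point can be made concrete. A minimal sketch in Python with NumPy (the 5 Hz test signal and 100 Hz sample rate are illustrative choices, not from the comment): knowing what the transform should produce is exactly what lets you check an AI-assisted version of the same analysis.

```python
import numpy as np

# A 5 Hz sine wave sampled at 100 Hz for 1 second.
sample_rate = 100
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 5 * t)

# rfft gives the spectrum for non-negative frequencies; with N = 100
# samples over 1 second, the bin spacing is exactly 1 Hz, so the index
# of the largest-magnitude bin is the dominant frequency in Hz.
spectrum = np.abs(np.fft.rfft(signal))
dominant_hz = int(np.argmax(spectrum))
print(dominant_hz)  # 5
```

If you know to expect the peak at bin 5 here, you can spot when a generated analysis gets the bin-to-frequency mapping wrong; if you don't, the output just looks plausible either way.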

[–] [email protected] 3 points 42 minutes ago

LLMs can be useful for translation between programming languages. I recently asked one to generate server code given client code written in a different language, and the LLM-generated code was spot on!

[–] [email protected] 9 points 6 hours ago* (last edited 6 hours ago)

For coding it's also useful for doing the menial grunt work that's easy but just takes time.

You're not going to replace a senior dev with it, of course, but it's a great tool.

My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.

[–] [email protected] 21 points 7 hours ago (1 children)

YES

YES

FUCKING YES! THIS IS A WIN!

Hopefully they curtail their investments and stop wasting so much fucking power.

[–] [email protected] 21 points 6 hours ago

I think the best way I've heard it put is: "if we absolutely have to burn down a forest, I want warp drive out of it, not a crappy Python app."

[–] [email protected] 54 points 9 hours ago (1 children)

It is fun to generate some stupid images a few times, but you can't trust that "AI" crap with anything serious.

[–] [email protected] 38 points 8 hours ago (1 children)

I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most, if not all, of the use cases companies want to use it for.

For example, I use the Hoarder app which is a site bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies the tags to it. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
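For readers curious what that kind of local pipeline might look like, here is a minimal sketch against Ollama's standard `/api/generate` endpoint. The model name, prompt wording, and comma-separated tag format are assumptions for illustration, not Hoarder's actual implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


def build_prompt(page_text: str) -> str:
    # Ask for a one-sentence summary, then a final line of topic tags.
    return (
        "Summarize the following page in one sentence. Then, on a new line, "
        "list 3-5 short topic tags separated by commas.\n\n" + page_text
    )


def parse_tags(model_output: str) -> list[str]:
    # Treat the last non-empty line as the tag list and split on commas.
    lines = [ln.strip() for ln in model_output.splitlines() if ln.strip()]
    return [t.strip() for t in lines[-1].split(",")] if lines else []


def summarize(page_text: str, model: str = "llama3") -> str:
    # Blocking request; stream=False makes Ollama return one JSON object
    # whose "response" field holds the full generated text.
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(page_text), "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

As the comment notes, the appeal of this shape is that a wrong or extra tag costs nothing: the output only has to be roughly right to be useful.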

On the other end, you have Google’s Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through the results yourself. And like 90% of the time it actually does a great job. The problem is the goal, which is to save you from having to check individual sources, combined with its reliability rate. If I google 100 things and Gemini answers 99 of them accurately but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I’m now back to just… you know… looking through the search results one by one like I would have anyway without the AI.
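That trade-off can be put in rough numbers. A back-of-the-envelope sketch, assuming each answer is independently correct with probability 0.99 (the per-answer rate is the commenter's hypothetical, and real errors are unlikely to be independent):

```python
# Chance that at least one of 100 answers is hallucinated when each
# answer is independently correct with probability 0.99.
p_correct = 0.99
queries = 100

p_at_least_one_wrong = 1 - p_correct ** queries
print(round(p_at_least_one_wrong, 2))  # 0.63
```

So even a 99%-accurate answer box leaves you with better-than-even odds of at least one fabrication over 100 queries, which is why "verify every answer" remains the rational policy.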

So while AI is far from useless, it can’t currently be relied on for anything important, and may never be, and that’s where the money to be made is.

[–] [email protected] 9 points 7 hours ago (1 children)

Even your manual search results may surface incorrect sources, selection bias toward what you want to see, or even AI-generated slop, so the AI-generated results are just another layer on top of that. Link-aggregating search engines are slowly becoming useless at this rate.

[–] [email protected] 7 points 7 hours ago

While that’s true, the thing that stuck out to me is that the AI wasn’t even misled by finding AI slop, or by somebody falsely asserting something. I googled something with a clear yes-or-no answer: “Does X technology use Y protocol?” The AI came back with “Yes it does, and here’s how it uses it,” and upon visiting the reference page for that answer, it was documentation for that technology which explained very clearly that X technology does NOT use Y protocol, and then went into detail on why it doesn’t. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.

[–] [email protected] 4 points 6 hours ago

AI is burning a shit ton of energy and researchers’ time though!

[–] [email protected] 155 points 13 hours ago (6 children)

JC Denton said it best in 2001:
