this post was submitted on 11 Feb 2025
523 points (98.7% liked)
you are viewing a single comment's thread
view the rest of the comments
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
Introduced factual errors
Yeah that's... that's bad. As in, not good. As in, it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
That's the core problem though, isn't it? They're just predictive text machines that don't understand what they're saying, yet we're treating them as if they were some amazing solution to all our problems.
Well, "we" arent' but there's a hype machine in operation bigger than anything in history because a few tech bros think they're going to rule the world.
I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.
How good are the human answers? I mean, I expect that an AI's error rate is currently higher than an "expert" in their field.
But I'd guess the AI is quite a bit better than, say, the average Republican.
I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.
There’s no point in comparing this to a human, since this is usually something done for automation, that is, to work for a lot of people or a large quantity of articles. At best you can compare it to other automated summaries that existed before LLMs, which might not have all the info, but won’t make up random facts that aren’t in the article.
I'm more interested in the technology itself, rather than its current application.
I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren't cheering her on: they're sitting in their recliners, smugly claiming she's useless. She can't even participate in a marathon, let alone compete with actual athletes!
Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.
It’s not that people simply decided to hate on AI; it was the sensationalist media hyping it up to the point of scaring people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the actual functionality people want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper it’s trying to sell and thinks it’s different because of it. The result couldn’t have been any different: when something gets too much exposure it also gets a lot of hate, especially when it’s forced on people.
Alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
So flip a coin every time you read an article to see whether your quick and easy summary comes with significant issues. (Rough arithmetic below.)
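To put that in perspective, here's a rough back-of-the-envelope sketch. The 49% / 81% figures are the ones cited above; treating each summary as independent and aggregating over N articles is my own simplification, not something from the study.

```python
# Back-of-the-envelope numbers, taking the cited figures at face value.
# The per-batch aggregation below is my own illustration, not from the study.

p_no_significant_issue = 0.49   # summaries with no significant issues (cited above)
p_no_factual_error = 0.81       # summaries with no factual errors (cited above)

print(f"Chance a given summary has a significant issue: {1 - p_no_significant_issue:.0%}")
print(f"Chance a given summary has a factual error:     {1 - p_no_factual_error:.0%}")

# Assuming each summary is independent, the chance that at least one of
# N summaries contains a factual error:
for n in (1, 5, 10, 20):
    at_least_one_error = 1 - p_no_factual_error ** n
    print(f"Over {n:2d} summaries: {at_least_one_error:.0%} chance of at least one factual error")
```

Even at a 19% per-summary error rate, reading a handful of summaries makes it very likely you've absorbed at least one fabricated fact.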
It's easy, it's quick, and it's free: pouring river water in your socks.
Fortunately, there are other possible criteria.
If it doesn't work, then quick, cheap, and easy is pointless.
I'll make you dinner every night for free, but one night a week it will make you ill. Maybe a little, maybe a lot.
Do you dislike AI?
I don't necessarily dislike "AI" but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.
Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.
I work in tech and can confirm that the vast majority of engineers "dislike AI" and are disillusioned with AI tools, even the ones that work on AI/ML tools. And it's fewer and fewer people the higher up the pay scale you go.
There isn't a single complex coding problem an AI can solve. If you don't understand something and it helps you write it I'll close the MR and delete your code since it's worthless. You have to understand what you write. I do not care if it works. You have to understand every line.
"But I use it just fine and I'm an..."
Then you're not an engineer and you shouldn't have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.
That's some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.
I wholeheartedly dislike people who think they need to define arcane rules for how a task is achieved instead of just looking at the output.
Accept that you have probably already merged code that was generated by AI, and that's totally fine as long as the tests pass and it fits the architecture.
You're supposed to gatekeep code. There is nothing wrong with gatekeeping things that aren't hobbies.
If someone can't explain every change they're making and why they chose to do it that way they're getting denied. The bar is low.
"I can calculate powers with decimal values in the exponent and if you can not do that on paper but instead use these machines, your calculations are worthless and you are not an engineer"
You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.
You can also just have an application designed for that task do it more accurately.
If you can't do that you're not an engineer. If you don't recommend that you're not an engineer.
Is it worse than the current system of editors making shitty click bait titles?
Surprisingly, yes