this post was submitted on 01 Jun 2024
1615 points (98.6% liked)

[–] [email protected] 44 points 6 months ago (6 children)

I wonder if all these companies rolling out AI before it's ready will have a widespread impact on how people perceive AI. If people learn early on that AI answers can't be trusted, will they be less likely to use it, even once it improves to the point of being useful?

[–] [email protected] 19 points 6 months ago

Personally, that's exactly what's happening to me. I've seen enough to know that AI can't be trusted to give a correct answer, so I don't use it for anything important. It's a novelty, like Siri and Google Assistant were when they first came out (and honestly still are), where the best use is to get it to tell a joke or answer very narrow trivia questions.

There must be a lot of people thinking the same. AI currently feels unhelpful and wrong; we'll see whether it just becomes another passing fad.

[–] [email protected] 18 points 6 months ago (1 children)

If so, companies rolling out blatantly wrong AI are doing the world a service and protecting us against subtly wrong AI.

[–] [email protected] 3 points 6 months ago

Google were the good guys after all????

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago)

To be fair, you should fact-check everything you read on the internet, no matter the source (though I admit that's getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquisition tool if you take everything it tells you with a grain of salt, just like everything else.

This is one of the reasons I only use AI implementations that cite their sources (edit: not Google's), because you can check the source it used and see for yourself how much is accurate and how much is hallucinated bullshit. Hell, I've had AI cite an AI-generated webpage as its source on far too many occasions.

Going back to what I said at the start: have you ever read an article or watched a video on a subject you're knowledgeable about, just for fun, to count the inaccuracies? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.

[–] [email protected] 5 points 6 months ago

will have a widespread impact on how people perceive AI

Here's hoping.

[–] [email protected] -1 points 6 months ago (1 children)

I'm no defender of AI, and it blatantly making up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don't see this period of low to no trust lasting.

Remember how bad autocorrect was when it first rolled out? People were always complaining about it and cracking jokes about how dumb it was. Then it slowly got better and better, and now, for the most part, everyone just trusts their phone to fix any spelling mistakes they make, as long as they're close enough.

[–] [email protected] 1 points 5 months ago

There's a big difference between my phone changing caulk to cock and my phone telling me to make pizza with Elmer's glue.