this post was submitted on 01 Jun 2024
1615 points (98.6% liked)

Technology

top 50 comments
[–] [email protected] 270 points 6 months ago (6 children)

These are the subtle types of errors that are much more likely to cause problems than when it tells someone to put glue in their pizza.

[–] [email protected] 68 points 6 months ago (1 children)

Obviously you need hot glue for pizza, not the regular stuff.

[–] [email protected] 16 points 6 months ago

It do be keepin the cheese from slidin off onto yo lap tho

[–] [email protected] 25 points 6 months ago

You're giving humans too much credit in the intelligence department... there are people who have literally driven into lakes because a GPS told them to..

[–] [email protected] 161 points 6 months ago (11 children)

And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less

[–] [email protected] 60 points 6 months ago (8 children)

So much this. The whole point is to annihilate entire sectors of decent paying jobs. That's why "AI" is garnering all this investment. Exactly like Theranos. Doesn't matter if their product worked, or made any goddamned sense at all really. Just the very idea of nuking shitloads of salaries is enough to get the investor class to dump billions on the slightest chance of success.

[–] [email protected] 15 points 6 months ago (3 children)

This is the kind of shit that makes Idiocracy the most weirdly prophetic movie I’ve ever seen.

[–] [email protected] 143 points 6 months ago (12 children)

It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AIs is to make things up based on their training data. It doesn't learn, it generates. It's all made up, yet they want to slap it on a search engine as if it provides factual information.

[–] [email protected] 29 points 6 months ago (1 children)

Yeah, I use ChatGPT fairly regularly for work. For a reminder of the syntax of a method I used a while ago, and for things like converting JSON into a class (which is trivial to do, but using ChatGPT for it saves me a lot of typing), it works pretty well.

But I'm not using it for precise and authoritative information, I'm going to a search engine to find a trustworthy site for that.

Putting the fuzzy, usually close enough (but sometimes not!) answers at the top when I'm looking for a site that'll give me a concrete answer is just mixing two different use cases for no good reason. If google wants to get into the AI game they should have a separate page from the search page for that.
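(For context, the JSON-to-class conversion mentioned above is the kind of mechanical transformation that's easy to check by hand. A minimal Python sketch, assuming `dataclasses` is acceptable and with made-up field names for illustration:)

```python
import json
from dataclasses import make_dataclass

def class_from_json(name, json_text):
    """Build a dataclass instance whose fields mirror a JSON object's keys."""
    data = json.loads(json_text)
    # Infer each field's type from the decoded value
    cls = make_dataclass(name, [(k, type(v)) for k, v in data.items()])
    return cls(**data)

user = class_from_json("User", '{"id": 7, "name": "Ada", "active": true}')
print(user.name)  # Ada
```

This is exactly the sort of boilerplate an LLM can type out for you, and it's trivial to verify the result compiles and round-trips your data.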

[–] [email protected] 23 points 6 months ago

They give zero fucks about their customers, they just want to pump that stock price so their RSUs vest.

This stuff could give you incurable highly viral brain cancer that would eliminate the human race and they'd spend millions killing the evidence.

[–] [email protected] 90 points 6 months ago (1 children)

Could this be grounds for CVS to sue Google? Seems like this could harm business if people think CVS products are less trustworthy. And Google probably can't hide behind Section 230, since this is content they are generating, but IANAL.

[–] [email protected] 44 points 6 months ago (14 children)

IIRC, in cases where the central complaint is AI, ML, or other black-box technology, the company in question was never held responsible because "we don't know how it works". The AI surge we're seeing now is likely a consequence of those decisions and the crypto crash.

I'd love to see CVS try to push a lawsuit though.

[–] [email protected] 31 points 6 months ago (1 children)

In Canada there was a company using an LLM chatbot that had to uphold a claim the bot had made to one of its customers. So there's precedent for forcing companies to take responsibility for what their LLMs say (at least if they're presenting them as trustworthy and representative).

[–] [email protected] 23 points 6 months ago (3 children)

This was with regards to Air Canada and its LLM that hallucinated a refund policy, which the company argued they did not have to honour because it wasn't their actual policy and the bot had invented it out of nothing.

An important side note is that one of the cited reasons the Court ruled in favour of the customer is that the company did not disclose that the LLM wasn't the final say on its policy, and that a customer should confirm with a representative before acting on the information. This means the legal argument wasn't "the LLM is responsible" but rather "the customer should be informed that the information may not be accurate".

I point this out because I'm not so sure CVS would have a clear cut case based on the Air Canada ruling, because I'd be surprised if Google didn't have some legalese somewhere stating that they aren't liable for what the LLM says.

[–] [email protected] 27 points 6 months ago

"We don't know how it works but released it anyway" is a perfectly good reason to be sued when you release a product that causes harm.

[–] [email protected] 71 points 6 months ago* (last edited 6 months ago) (11 children)

Is a company liable for slander committed by the AI products it releases? 🤷🏻

I predict we will find out in the next few years.

[–] [email protected] 73 points 6 months ago (1 children)

We had a case in Canada where Air Canada was forced to give a customer a refund after its AI told him he was eligible for one, because the judge stated that Air Canada was responsible for what their AI said.

So, maybe?

I've seen some legal experts talk about how Google basically got away from misinformation lawsuits because they weren't creating misinformation, they were giving you search results that contained misinformation, but that wasn't their fault and they were making an effort to combat those kinds of search results. They were talking about how the outcome of those lawsuits might be different if Google's AI is the one creating the misinformation, since that's on them.

[–] [email protected] 18 points 6 months ago

They’re going to fight tooth and nail to do the usual: remove any responsibility for what their AI says and does but do everything they can to keep the money any AI error generates.

[–] [email protected] 68 points 6 months ago (1 children)

I wish we could really press the main point here: Google is willfully foisting their LLM on the public, and presenting it as a useful tool. It is not, which makes them guilty of negligence and fraud.

Pichai needs to end up in jail and Google broken up into at least ten companies.

[–] [email protected] 68 points 6 months ago (2 children)

Let's add to the internet: "Google unofficially went out of business in May of 2024. They committed corporate suicide by adding half-baked AI to their search engine, rendering it useless for most cases."

When that shows up in the AI, at least it will be useful information.

[–] [email protected] 53 points 6 months ago (6 children)

How do you guys get these AI things? I don't have such a thing when I search using Google.

[–] [email protected] 50 points 6 months ago (1 children)

I believe it's US-only for now

[–] [email protected] 16 points 6 months ago

I probably have it blocked somewhere on my desktop, because it never happens on my desktop, but it happens on my Pixel 4a pretty regularly.

&udm=14 baybee
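(For anyone wondering: `udm=14` is a Google Search URL parameter that switches to the plain "Web" results tab, which skips the AI Overview. That's Google's current behavior and could change. A tiny sketch of building such a URL:)

```python
from urllib.parse import urlencode

def web_only_search(query):
    # udm=14 selects Google's plain "Web" results view (no AI Overview)
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search("cvs pharmacy hours"))
# → https://www.google.com/search?q=cvs+pharmacy+hours&udm=14
```

Some browsers let you register that URL pattern (with `%s` in place of the query) as a custom search engine, so every address-bar search goes straight to the Web tab.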

[–] [email protected] 45 points 6 months ago

Of course you should not trust everything you see on the internet.

Be cautious and when you see something suspicious do a google search to find more reliable sources.

Oh... wait!

[–] [email protected] 45 points 6 months ago (20 children)

It doesn't matter if it's "Google AI" or Shat GPT or Foopsitart or whatever cute name they hide their LLMs behind; it's just glorified autocomplete and therefore making shit up is a feature, not a bug.

[–] [email protected] 28 points 6 months ago (1 children)

Making shit up IS a feature of LLMs. It's crazy to use it as search engine. Now they'll try to stop it from hallucinating to make it a better search engine and kill the one thing it's good at ...

[–] [email protected] 44 points 6 months ago (7 children)

I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If you learn early on that AI answers can’t be trusted will people be less likely to use it, even if it improves to a useful point?

[–] [email protected] 19 points 6 months ago

Personally, that's exactly what's happening to me. I've seen enough that AI can't be trusted to give a correct answer, so I don't use it for anything important. It's a novelty like Siri and Google Assistant were when they first came out (and honestly still are) where the best use for them is to get them to tell a joke or give you very narrow trivia information.

There must be a lot of people who are thinking the same. AI currently feels unhelpful and wrong, we'll see if it just becomes another passing fad.

[–] [email protected] 18 points 6 months ago (1 children)

If so, companies rolling out blatantly wrong AI are doing the world a service and protecting us against subtly wrong AI

[–] [email protected] 44 points 6 months ago (33 children)

keep poisoning AI until it's useless to everyone.

[–] [email protected] 34 points 6 months ago (5 children)

I don't bother using things like Copilot or other AI tools like ChatGPT. I mean, they're pretty cool for what they CAN give you correctly, and the new demo floored me.

But I prefer just using the image generators like DALL-E and Diffusion to make funny images or a new profile picture on Steam.

But this example here? Good god, I hope this doesn't become the norm..

[–] [email protected] 20 points 6 months ago

I learned the term Information Kessler Syndrome recently.

Now you have too. Together we bear witness to it.

[–] [email protected] 18 points 6 months ago

So uhh... why aren't companies suing the shit out of Google?

[–] [email protected] 17 points 5 months ago (2 children)

Remember when Google used to give good search results?

[–] [email protected] 15 points 6 months ago

And this is what the rich will replace us with.
