this post was submitted on 01 Jun 2024
1614 points (98.6% liked)

Technology

 
[–] [email protected] 71 points 5 months ago* (last edited 5 months ago) (7 children)

Are companies liable for slander committed by the AI products they release? 🤷🏻

I predict we will find out in the next few years.

[–] [email protected] 73 points 5 months ago (1 children)

We had a case in Canada where Air Canada was forced to give a customer a refund after its AI told him he was eligible for one, because the judge held that Air Canada was responsible for what its AI said.

So, maybe?

I've seen some legal experts discuss how Google has largely escaped misinformation lawsuits because it wasn't creating the misinformation: it was returning search results that happened to contain misinformation, which wasn't its fault, and it was making an effort to combat those kinds of results. Their point was that the outcome of those lawsuits might be different when Google's AI is the one creating the misinformation, since that's on Google.

[–] [email protected] 7 points 5 months ago

Yeah, the Air Canada case probably isn't a big indicator of where the legal system will end up on this. The guy was entitled to the money if he submitted the request on time, and the only reason he didn't was that the chatbot gave him the wrong information. It's the kind of case that shouldn't have reached a courtroom: come on, you're supposed to give him the money; it's just a paperwork screwup caused by your chatbot that created this whole problem.

As for someone getting sick because they put glue on their pizza because Google's AI told them to... we'll have to see. The courts may go with "a reasonable person should know that what an AI says isn't always fact," which will probably hold water as long as Google keeps a disclaimer on its AI-generated results.

[–] [email protected] 18 points 5 months ago

They're going to fight tooth and nail to do the usual: deny any responsibility for what their AI says and does, while doing everything they can to keep whatever money an AI error generates.

[–] [email protected] 15 points 5 months ago* (last edited 5 months ago) (1 children)

Slander is spoken. In print, it's libel.

- J. Jonah Jameson

[–] [email protected] 2 points 5 months ago

That's ok, ChatGPT can talk now.

[–] [email protected] 8 points 5 months ago (1 children)

At the very least it should carry a prominent "for entertainment purposes only" disclaimer, except it fails at that purpose, too.

[–] [email protected] 12 points 5 months ago* (last edited 5 months ago)

I think the image generators are good for generating shitposts quickly. Best use case I’ve found thus far. Not worth the environmental impact, though.

[–] [email protected] 5 points 5 months ago

Tough question. I doubt it, though. I'd guess they would have to prove malicious intent in some form. When a person slanders someone, they act on a preformed bias to promote themselves while intentionally hurting the other party. You can argue that the training data contained a bias, that the LLM promotes itself by being a constant source of information users draw from (and therefore makes money), and that it was, in theory, hurting the company in question. The last hurdle would be whether the LLM intentionally tried to hurt that company. All of those arguments have holes. If I were the judge or jury and you handed me those points, I'd say it isn't proven beyond a reasonable doubt.

[–] [email protected] 4 points 5 months ago (1 children)

If you're a startup, I guarantee it is.

Big tech... I'll put my chips on hell no.

[–] [email protected] 1 points 5 months ago (1 children)

Yet another nail in the coffin of rule of law.

[–] [email protected] 1 points 5 months ago

🤑🤑🤑🤑

[–] [email protected] 4 points 5 months ago

Slander/libel nothing. It's going to end up killing someone.