this post was submitted on 04 Aug 2024
Technology
[–] [email protected] -3 points 3 months ago* (last edited 3 months ago) (1 children)

[Artificial intelligence] can be used in automated solutions, and it doesn't intrinsically have to be supervised. Whether or not it is intelligent is irrelevant.

It's relevant because, when people talk about "AI" that's not actually intelligent (i.e., all AI), they're being incoherent. What exactly are they talking about? Computers in general? It's just noise, spam, etc.

[–] [email protected] 2 points 3 months ago

It's relevant because, when people talk about "AI" that's not actually intelligent (i.e., all AI), they're being incoherent. What exactly are they talking about? Computers in general? It's just noise, spam, etc.

If your objection is that AI "isn't actually intelligent," then you're just being pedantic, and your objection has no substance. Replace "AI" with "systems that leverage machine learning and whose inner workings we don't fully understand" if you need to.

Did you watch the video? Do you have any familiarity with how AI technologies are being used today? At least one of those answers must be no for you to have concluded that the video's message was incoherent.

Let me give you an example. As part of the ongoing conflict in Gaza, Israel has been using AI systems nicknamed "the Gospel" and "Lavender" to identify Hamas militants, their associates, and the buildings they operate from. This information is then rubber-stamped by a human analyst, and unguided missiles are sent to the identified location, often destroying entire buildings (filled with other people, generally the target's family) to kill the identified target.

There are countless incidents of AI being used without sufficient oversight, often resulting in harm to someone: the general public, minorities, or even the business that deployed the AI in the first place.

The paperclip video is a cautionary tale against giving an AI system too much power with too little oversight. That warning is relevant today, regardless of the precise architecture of the underlying system.