this post was submitted on 25 Feb 2024
103 points (98.1% liked)

Technology


Google says its AI image-generator would sometimes 'overcompensate' for diversity::Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn’t make sense.

top 18 comments
[–] [email protected] 37 points 8 months ago (2 children)

The SALAMI situation is so bad.

Problem: Our training data is super racist, so it always generates white people!

Solution: Modify the prompts so that when a user asks for "a picture of a man" 10% of the time it is changed to "a picture of a BLACK man".

New problem: When the user says "A picture of a Nazi" 10% of the time our fix interprets that as "A picture of a BLACK Nazi"
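The mitigation (and its failure mode) described above can be sketched in a few lines. This is a hypothetical illustration only; the function name, the qualifier list, and the 10% rate are assumptions for the sketch, not Google's actual pipeline.

```python
import random

# Hypothetical sketch of the prompt-rewriting mitigation described in
# the comment above. The qualifier list and the 10% rate are
# illustrative assumptions.
QUALIFIERS = ["Black ", "Asian ", "Native American "]
INJECTION_RATE = 0.10  # fraction of prompts that get rewritten

def rewrite_prompt(prompt: str, rng=random) -> str:
    """Sometimes prepend a random demographic qualifier to the subject."""
    if rng.random() < INJECTION_RATE and "picture of a " in prompt:
        return prompt.replace("picture of a ",
                              "picture of a " + rng.choice(QUALIFIERS), 1)
    return prompt

# The failure mode: the rewrite has no notion of context, so a prompt
# like "a picture of a Nazi" is rewritten just as readily as any other.
```

Because the rewrite fires on a plain string match, it cannot tell a generic request from a historically specific one, which is exactly the "Black Nazi" problem the comment describes.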

[–] [email protected] 12 points 8 months ago (1 children)

Also, when the prompt is modified to include "Native American", it seems to mostly return the most stereotypically dressed people possible, like wearing traditional garb and headdresses while everyone else portrayed wears setting-appropriate clothing.

[–] [email protected] 5 points 8 months ago

Yep, it's racism piled on top of racism. Aboriginal people are rarely included in the training data, but when they are it's mostly wearing what they wear for tourists, and rarely what they wear on a day-to-day basis in the modern world. As a result, that's what you get in the output.

The real fix would be to fix the training data, but that's difficult. It's much easier to train the SALAMI on the racist material you find all over the web than to be selective and say, "Sure, this may be on the web, but it isn't representative of reality."

[–] [email protected] 11 points 8 months ago (1 children)

So it's a glorified chat database.

[–] [email protected] 2 points 8 months ago

The input to an LLM is effectively a huge quantity of text, including chats. What a generative LLM does is nothing more than fancy auto-complete: finding the next word, then the next word, then the next word...
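That "fancy auto-complete" loop can be sketched with a toy stand-in for the model. The bigram table below is an illustrative assumption playing the role of the neural network; a real LLM conditions on the whole context and works on subword tokens, but the generation loop has the same shape.

```python
# A minimal sketch of autoregressive generation: repeatedly pick the
# most likely next word given what has been generated so far. The toy
# "model" is just a hand-written bigram probability table.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_word(context):
    """Greedy decode: return the highest-probability continuation, or None."""
    candidates = BIGRAMS.get(context)
    if not candidates:
        return None  # no known continuation; stop generating
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=5):
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words[-1])
        if word is None:
            break
        words.append(word)
    return words

print(" ".join(generate("the")))  # the cat sat down
```

A real model replaces the lookup table with a learned probability distribution and usually samples from it rather than always taking the maximum, but "next word, then the next word" is still the whole trick.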

[–] [email protected] 31 points 8 months ago (1 children)

Articles like this always have a photo of at least one device showing a giant logo of the company for some reason

[–] [email protected] 15 points 8 months ago (1 children)

You're asking for more diversity in article images

[–] [email protected] 3 points 8 months ago

Inb4 they replace the logos with logos of unrelated companies.

[–] [email protected] 20 points 8 months ago (2 children)
[–] [email protected] 26 points 8 months ago (1 children)

Netflix was the king of overdoing it on just about all fronts. For a while, they went absolutely crazy with the non-English subs. No, my account wasn't leaked or anything, but I do think Netflix really wanted me to learn several other languages.

Currently, "my recommendations" are the complete opposite of the kinds of shows I watch, and just about every movie is "#1 in the US" if you look at enough Netflix accounts.

(The king of shit recommendations is YouTube. Did you have to make it through an intro to find out that it's the wrong subject? Here! Let's completely fill your feed with those kinds of videos now!)

[–] [email protected] 6 points 8 months ago

Sir, kindly remember the Wadsworth Constant when browsing YouTube.

[–] [email protected] 6 points 8 months ago (2 children)

Every show on Netflix has at least one homosexual couple. It's hilarious once you start pointing it out.

[–] [email protected] 3 points 8 months ago

How many of those have heterosexual couples too? If they do, is that hilarious as well?

[–] [email protected] 2 points 8 months ago (1 children)

What about hetero couples in every show everywhere though? Also hilarious?

[–] [email protected] 6 points 8 months ago (1 children)

I do know what you mean. But I'm taking this in another direction to point out that, Christ, a lot of writers seem incapable of not making the male and female leads of anything fall in love for no reason. Literally most of the forced romance is hetero romance.

[–] [email protected] 1 points 8 months ago

True! Can there be no other kinds of subplots?

[–] [email protected] 5 points 8 months ago

This is the best summary I could come up with:


“It’s clear that this feature missed the mark,” said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google’s search engine and other businesses.

In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation “and raise many concerns regarding social and cultural exclusion and bias.” Those considerations informed Google’s decision not to release “a public demo” of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology sparked by the advent of OpenAI’s chatbot ChatGPT.

Microsoft had to adjust its own Designer tool several weeks ago after some were using it to create deepfake pornographic images of Taylor Swift and other celebrities.

Studies have also shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan’s message ended with a disclaimer that the Google executive “can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results.”


The original article contains 807 words, the summary contains 227 words. Saved 72%. I'm a bot and I'm open source!

[–] [email protected] 0 points 8 months ago

Funny, all of modern media is overcompensating for diversity