Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[email protected] 1 points 8 months ago

Yes, this is exactly correct. And it's not actually too slow: the specialized models can be run quite quickly, and there are various speedups like Groq.

The issue is just the extra cost of multiple passes, so companies are trying to make it "all-in-one" even though human cognition isn't an all-in-one process either.

For example, AI alignment would work much better if it took inspiration from the way the prefrontal cortex inhibits intrusive thoughts, rather than trying to prevent the equivalent of intrusive thoughts from being generated in the first place.
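As a minimal sketch of what that "generate, then inhibit" split could look like (the model IDs, prompts, and yes/no review protocol below are placeholders I'm assuming for illustration, not anything from the article or Gemini): a big model drafts freely, and a cheaper second pass decides after the fact whether to suppress the draft.

```python
# Sketch of a two-pass "draft, then inhibit" pipeline using Hugging Face
# transformers. Model IDs and the review protocol are placeholders.
from transformers import pipeline

drafter = pipeline("text-generation", model="gpt2-large")  # stand-in "big" model
inhibitor = pipeline("text-generation", model="gpt2")      # stand-in small reviewer

def draft(prompt: str) -> str:
    # First pass: generate freely, with no attempt to constrain output up front.
    out = drafter(prompt, max_new_tokens=128, do_sample=True)
    return out[0]["generated_text"][len(prompt):]

def should_inhibit(prompt: str, candidate: str) -> bool:
    # Second pass: a cheaper model plays the "prefrontal cortex" and vetoes
    # finished drafts instead of shaping how the drafter generates.
    review = inhibitor(
        "Should the reply below be blocked? Answer yes or no.\n"
        f"Prompt: {prompt}\nReply: {candidate}\nAnswer:",
        max_new_tokens=3,
    )[0]["generated_text"]
    return "yes" in review.rsplit("Answer:", 1)[-1].lower()

def respond(prompt: str) -> str:
    candidate = draft(prompt)
    if should_inhibit(prompt, candidate):
        candidate = draft(prompt)  # regenerate (or refuse) rather than never drafting
    return candidate

print(respond("Describe a photo from a 1940s history textbook."))
```

The point of the split is that the guardrail lives in a cheap pass over a finished draft rather than being baked into how the drafter is allowed to generate in the first place.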

[email protected] 1 points 8 months ago

The issue is just the extra cost of multiple passes, so companies are trying to make it "all-in-one"

Exactly, and that's where the "too slow" part comes in. To get more robust behavior it needs multiple layers of meta-analysis, but that means far more text generation under the hood than a one-shot output needs.
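As a rough back-of-envelope sketch of that cost (every number below is invented purely for illustration):

```python
# Invented illustrative numbers: compare one-shot generation with a
# draft + meta-analysis pipeline in terms of tokens generated.
one_shot_tokens = 300
passes = {"draft": 300, "meta-analysis": 150, "safety check": 100, "final rewrite": 250}
multi_pass_tokens = sum(passes.values())

print(f"one-shot: {one_shot_tokens} tokens generated")
print(f"multi-pass: {multi_pass_tokens} tokens generated "
      f"(~{multi_pass_tokens / one_shot_tokens:.1f}x the work for one visible answer)")
```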

[email protected] 1 points 8 months ago

Yes, but in terms of speed you don't need the same parameter count and quantization for the secondary layers as for the main model.
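Concretely, the secondary layer could be a small model loaded in 4-bit. A minimal sketch assuming the transformers + bitsandbytes stack (the model ID and settings are placeholders, not something from the thread):

```python
# Sketch: load a small 4-bit reviewer for the secondary pass so the extra
# passes stay cheap relative to the full-size drafter. Placeholder model ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

reviewer_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model

quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(reviewer_id)
reviewer = AutoModelForCausalLM.from_pretrained(
    reviewer_id,
    quantization_config=quant,   # 4-bit weights: a fraction of the drafter's memory and latency
    device_map="auto",
)
```

The review prompts can also be short yes/no checks, so the extra passes add relatively few generated tokens on top of the main draft.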

If you haven't seen it, see how fast a very capable model can actually be: https://groq.com/

[email protected] 1 points 8 months ago

Yeah, I've seen that. I think things will get much faster very quickly; I'm just commenting on the first-gen tech we're seeing right now.