The SALAMI situation is so bad.
Problem: Our training data is super racist, so the model always generates white people!
Solution: Modify the prompts so that when a user asks for "a picture of a man", 10% of the time it is changed to "a picture of a BLACK man".
New problem: When the user says "a picture of a Nazi", 10% of the time our fix turns that into "a picture of a BLACK Nazi".
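For anyone wondering what that "fix" actually amounts to, here is a minimal Python sketch. Everything in it (the function name, the 10% constant, the string substitution) is my own illustration of the naive approach described above, not any vendor's real pipeline:

```python
import random

REWRITE_PROBABILITY = 0.10  # the "10% of the time" from above

def apply_diversity_fix(prompt: str) -> str:
    """Naively inject a demographic qualifier into a fraction of prompts."""
    if random.random() < REWRITE_PROBABILITY:
        # Blind string substitution: no awareness of what the prompt is about.
        return prompt.replace("a picture of a", "a picture of a BLACK")
    return prompt

# Works as intended on the innocuous prompt:
print(apply_diversity_fix("a picture of a man"))
# ...and fires just as blindly where it shouldn't:
print(apply_diversity_fix("a picture of a Nazi"))
# ~10% of the time: "a picture of a BLACK Nazi"
```

The failure is baked in: the rewrite triggers on a coin flip over raw text, with no model of what the prompt means, so every prompt gets the same treatment whether or not it makes sense.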