this post was submitted on 14 May 2026
Memes
I think facial recognition technology is very different from threadiverse software. The fact that those technologies are trained on predominantly-white data is no surprise: both of your examples are data-based (ML models), where the data itself contains the bias.
I am talking more about open-source projects. It's important, as you rightfully call out, that we have a varied group of opinions within the developer group.
it's not just ai or training data for it; it's developers themselves too, including fediverse ones.
the biggest non-ai/training-data examples that i can think of came from before ai was ever a thing, like:
- usps had difficulty validating addresses because the software they obtained assumed a euro-centric naming schema
- airlines, health care providers, hotels, and state motor vehicle departments rejected registrations/reservations because trans people had no option to accurately select their sex
- health care providers misdosed patients because the software they used didn't account for highly-athletic/bodybuilding people or people with chronic conditions
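to make the first example concrete, here's a hypothetical sketch (not the actual usps code, which i've never seen) of the kind of euro-centric assumption that gets baked in by a developer rather than by training data: a name validator that assumes every name is exactly "First Last" in ascii letters, which rejects mononyms, diacritics, apostrophes, and non-latin scripts.

```python
import re

# Hypothetical biased validator: assumes names are "First Last" in ASCII.
NAIVE_NAME_RE = re.compile(r"^[A-Za-z]+ [A-Za-z]+$")

def naive_validate(name: str) -> bool:
    """Rejects many perfectly valid real-world names."""
    return NAIVE_NAME_RE.fullmatch(name) is not None

def better_validate(name: str) -> bool:
    """Looser check: any non-empty string after trimming.
    Accepts mononyms, diacritics, apostrophes, and non-Latin scripts."""
    return bool(name.strip())

# "Ada Lovelace" passes the naive check, but valid names like
# "José Núñez" (diacritics), "Sukarno" (mononym), and "李小龙"
# (non-Latin script) are all rejected by it.
for name in ["Ada Lovelace", "José Núñez", "Sukarno", "李小龙"]:
    print(name, naive_validate(name), better_validate(name))
```

the fix here isn't a cleverer regex, it's dropping the assumption entirely; which is exactly the kind of blind spot a homogeneous developer group won't notice.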
there are SO MANY examples out there where the bias clearly comes from the developer instead of the training data, and there's no way that any piefed developer is immune to their own biases or can even effectively mitigate them alone.