this post was submitted on 23 May 2024
110 points (95.1% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.



AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

you are viewing a single comment's thread
[–] [email protected] 2 points 4 months ago

Nah, all the original data came from humans. If it were all clean and properly tagged, there'd be no need for human intervention.

Unfortunately they scraped it from wherever the hell they could get it, and it's not all tagged correctly.

I'm sure they use more AI to pre-grade it, but at some point a set of real eyes needs to verify that something is what it's supposed to be.
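That pre-grading step can be pictured as a confidence-threshold triage: a model labels each scraped item, and anything it isn't sure about goes to a human review queue. This is a hypothetical sketch, not any company's actual pipeline; the function names, the toy classifier, and the 0.9 threshold are all illustrative assumptions.

```python
def triage(items, classify, threshold=0.9):
    """Split scraped items into auto-accepted labels and a human review queue."""
    auto_labeled, needs_review = [], []
    for item in items:
        label, confidence = classify(item)
        if confidence >= threshold:
            # Model is confident enough; accept its label automatically.
            auto_labeled.append((item, label))
        else:
            # Low confidence: a human has to look at this one.
            needs_review.append(item)
    return auto_labeled, needs_review

# Toy stand-in for the "AI pre-grader" — returns (label, confidence).
def classify(item):
    return ("safe", 0.95) if "cat" in item else ("unknown", 0.4)

auto, review = triage(["cat photo", "unlabeled clip"], classify)
```

The point of the sketch is the economics the comment describes: the threshold decides how much ends up in `needs_review`, and that queue is exactly the work being pushed to low-paid human moderators.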

This is more of a blood-diamonds or fair-trade-coffee situation; US legislation isn't going to have anything to do with it. You need to expose the companies using the data.