this post was submitted on 17 Jan 2024
33 points (86.7% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.

all 7 comments
[–] [email protected] 9 points 9 months ago (3 children)

It should be readily apparent that no AI used to kill can ever be ethical.

[–] [email protected] 10 points 9 months ago (2 children)

But if it kills everyone, it can be fair.

[–] [email protected] 9 points 9 months ago

this is a great illustration of the difference between fair and ethical

[–] [email protected] 8 points 9 months ago

But how will we automate our trolley problems?

[–] [email protected] 1 point 9 months ago

Are you suggesting it's never ethical to kill? Nothing is black and white, especially when it comes to ethics.

[–] [email protected] 2 points 9 months ago

This is the best summary I could come up with:


Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.

Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito’s role in shaping the field of AI ethics, since this is a matter of public concern.

At the Media Lab, I learned that the discourse of “ethical AI,” championed substantially by Ito, was strategically aligned with a Silicon Valley effort to avoid legally enforceable restrictions on controversial technologies.

Although the Silicon Valley lobbying effort has consolidated academic interest in “ethical AI” and “fair algorithms” since 2016, a handful of papers on these topics had appeared in earlier years, even if framed differently.

I wrote, “If tens of millions of dollars from nonprofit foundations and individual donors are not enough to allow us to take a bold position and join the right side, I don’t know what would be.” (Omidyar funds The Intercept.)

For example, the board notes that although “the term ‘fairness’ is often cited in the AI community,” the recommendations avoid this term because of “the DoD mantra that fights should not be fair, as DoD aims to create the conditions to maintain an unfair advantage over any potential adversaries.” Thus, “some applications will be permissibly and justifiably biased,” specifically “to target certain adversarial combatants more successfully.” The Pentagon’s conception of AI ethics forecloses many important possibilities for moral deliberation, such as the prohibition of drones for targeted killing.


The original article contains 3,335 words, the summary contains 270 words. Saved 92%. I'm a bot and I'm open source!