this post was submitted on 30 Mar 2025
24 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[email protected] 8 points 3 days ago

Within the last few years, and particularly the last few months, we've heard the same refrain again and again: AI is offered as the reason for an abuse committed by a corporation, military, or other powerful entity.

Oh, how does that '70s IBM memo run again? But this is both the allure of "AI" (perhaps even a selling point) and its Achilles heel. When companies and authorities let it make decisions, they do so under the illusion that nobody is responsible for the consequences of those decisions.

The "AI" bros would have us know that, as a technology, it is totally neutral. "It's just maths, you can't regulate maths." Yet they attribute faculty and agency to algorithms in a way that acquits their users of responsibility, while denying any such consideration to the people whose work the algorithms are trained on.

Hopefully this first wave of realisation, that trusting faulty statistical implementations to do humans' work always comes at a greater cost than the projected savings, will also be the last. We need a good Butlerian Jihad so we can get back to our actual, global problems.