A few months ago a guest on trashfuture was talking about this. Their point was that a lot of this "predictive/AI/surveillance and recognition" technology is no more efficient or effective than having a human do the same job; what it actually does is obscure the decision-making process, making any kind of accountability or oversight of actions taken by fully or semi-automated "defense" systems impossible. If target acquisition or firing is supported by AI, who's to say the software didn't just glitch and tell the poor innocent war criminal to shoot an unarmed child?
On top of that, it adds a layer of abstraction between the operator and the consequences of their input, making barbarous acts of violence mundane. We already have a very good example of this with drone operators: they press A on an Xbox controller and someone, along with their whole family, blows up. Adding a probabilistic layer to that dynamic works much like a light switch with "Shabbat mode": you flip the switch, and a computer inside it turns the light on at a random moment, so you can tell yourself you didn't turn the light on, even though the result is always the same.
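If it helps, here's a toy sketch of that dynamic in Python (entirely my own illustration, nothing from the episode): the operator's input goes through a random layer that "decides" the outcome, so the causal link feels broken even though the statistics don't change.

```python
import random

def shabbat_switch(flipped: bool, p_on: float = 0.95) -> bool:
    """Toy 'Shabbat mode' switch: the operator flips the switch,
    but a probabilistic layer decides whether the light actually
    turns on. The outcome is almost always the same; only the
    operator's sense of responsibility changes."""
    if not flipped:
        return False
    # The machine "makes the call", not the person who flipped it.
    return random.random() < p_on

# Over many flips the light still comes on ~95% of the time:
# the randomness changes nothing about the result, only about
# who feels like they caused it.
on_count = sum(shabbat_switch(True) for _ in range(1000))
print(f"light on {on_count}/1000 times")
```

Swap "light" for "strike authorization" and the moral laundering function is the same: the operator provided the input, the model provided the deniability.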