this post was submitted on 27 Mar 2025
662 points (94.6% liked)
Technology
I am once again begging journalists to be more critical ~~of tech companies~~.
This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not normal people who drive almost exclusively during their commutes (which is probably the most dangerous time to drive since it's precisely when they're all driving).
We also need to know how often Waymo's human operators intervene in the supposedly autonomous operations. The latest data we have on this, which was leaked a while back, is that Cruise (a different company) cars were actually far less autonomous than advertised, requiring more than one employee per car.
edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own publishing without knowing how the sausage is made, they can spin their data however they want.
edit2: Updated to say that journalists should be more critical in general, not just about tech companies.
Journalists aren't even critical of police press releases anymore; most simply print whatever they're told verbatim. It might as well just be advertising.
I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."
The meat of the true issue right here. Journalism and investigative journalism aren't just dead; their corpses have been feeding a palm tree like a pod of beached whales for decades. It's a bizarre state of affairs to read news coverage and come out the other side less informed, even without reading literal disinformation. It somehow seems so much worse that they're not just off-target, but that they don't even understand why or how they're fucking it up.
I was going to say they should only be comparing them under the same driving areas, since I know they aren't allowed in many areas.
But you're right, it's even tighter than that.
These articles frustrate the shit out of me. They accept both the company's own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.
@[email protected] @[email protected]
to amplify the previous point, taps the sign as Joseph Weizenbaum turns over in his grave
tl;dr A driverless car cannot possibly be "better" at driving than a human driver. The comparison is a category error and therefore nonsensical; it's also a distraction from important questions of morality and justice. More below.
Numerically, it may some day be the case that driverless cars have fewer wrecks than cars driven by people.(1) Even so, it will never be the case that when a driverless car hits and kills a child the moral situation will be the same as when a human driver hits and kills a child. In the former case the liability for the death would be absorbed into a vast system of amoral actors with no individuals standing out as responsible. In effect we'd amortize and therefore minimize death with such a structure, making it sociopathic by nature and thereby adding another dimension of injustice to every community where it's deployed.(2) Obviously we've continually done exactly this kind of thing since the rise of modern technological life, but it's been sociopathic every time and we all suffer for it despite rampant narratives about "progress" etc.
It will also never be the case that a driverless car can exercise the judgment humans have to decide whether one risk is more acceptable than another, and then be held to account for the consequences of their choice. This matters.
Please (re-re-)read Weizenbaum's book if you don't understand why I can state these things with such unqualified confidence.
Basically, we all know damn well that whenever driverless cars show some kind of numerical superiority to human drivers (3) and become widespread, every time one kills, let alone injures, a person no one will be held to account for it. Companies are angling to indemnify themselves from such liability, and even if they accept some of it no one is going to prison on a manslaughter charge if a driverless car kills a person. At that point it's much more likely to be treated as an unavoidable act of nature no matter how hard the victim's loved ones reject that framing. How high a body count do our capitalist systems need to register before we all internalize this basic fact of how they operate and stop apologizing for it?
(1) Pop quiz! Which seedy robber baron has been loudly claiming for decades now that full self driving is only a few years away, and depends on people believing in that fantasy for at least part of his fortune? We should all read Wrong Way by Joanne McNeil to see the more likely trajectory of "driverless" or "self-driving" cars.
(2) Knowing this, it is irresponsible to put these vehicles on the road, or for people with decision-making power to allow them on the road, until this new form of risk is understood and accepted by the community. Otherwise you're forcing a community to suffer a new form of risk without consent and without even a mitigation plan, let alone a plan to compensate or otherwise make them whole for their new form of loss.
(3) Incidentally, quantifying aspects of life and then using the numbers, instead of human judgement, to make decisions was a favorite mission of eugenicists, who stridently pushed statistics as the "right" way to reason to further their eugenic causes. Long before Zuckerberg's hot or not experiment turned into Facebook, eugenicist Francis Galton was creeping around the neighborhoods of London with a clicker hidden in his pocket counting the "attractive" women in each, to identify "good" and "bad" breeding and inform decisions about who was "deserving" of a good life and who was not. Old habits die hard.
Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum's book. It'd save me a lot of talking!
So let me make sure I understand your argument. Because nobody can be held liable for one hypothetical death of a child when an accident happens with a self driving car, we should ban them so that hundreds of real children can be killed instead. Is that what you are saying?
As far as I know, Waymo has only been involved in one fatality. The Waymo was sitting still at a red light in traffic when a speeding SUV (according to reports, going at an extreme rate of speed) rammed it from behind into other cars. The SUV then continued into traffic, where it struck more cars and eventually killed someone. That's the only fatal accident Waymo has been involved in after 50 million miles of driving. But instead of making it safer for children, you would prefer more kids die just so you have someone to blame?
@[email protected]
No, this strawman is obviously not my argument. It's curious you're asking whether you understand, and then opining afterwards, rather than waiting for the clarification you suggest you're seeking. When someone responds to a no-brainer suggestion, grounded in skepticism but perfectly sensible nevertheless, with a strawman seemingly crafted to discredit it, one has to wonder if that someone is writing in good faith. Are you?
For anyone who is reading in good faith: we're clearly not talking about one hypothetical death, since more than one real death involving driverless car technology has already occurred, and there is no doubt there will be more in the future given the nature of conducting a several-ton hunk of metal across public roads at speed.
It should go without saying that hypothetical auto wreck fatalities occurring prior to the deployment of a technology are not the fault of everyone who delayed that deployment, meaning in particular that these hypothetical deaths do not justify hastening deployment. This is a false conflation regardless of how many times Marc Andreessen and his apostles preach variations of it.
Finally "ban", or any other policy prescription for that matter, appeared nowhere in my post. That's the invention of this strawman's author (you can judge for yourself what the purpose of such an invention might be). What I urge is honestly attending to the serious and deadly important moral and justice questions surrounding the deployment of this class of technology before it is fully unleashed on the world, not after. Unless one is so full up with the holy fervor of technoutopianism that one's rationality has taken leave, this should read as an anodyne and reasonable suggestion.
I was asking in good faith because the way you talk is not easily comprehensible. I can barely follow whatever argument you are trying to make. I think you are trying to say that we shouldn't allow them on the road until we have fully decided who is at fault in an accident?
Also, only one death has occurred so far involving driverless cars: a speeding SUV rammed into a stopped driverless car, then continued on and hit five other cars, killing someone in one of them. That's it. The only death involved a driverless car sitting still, not moving, not doing anything... and it wasn't even the car that hit the car in which the person died. So I would say deaths that are the fault of a driverless car are still hypothetical.