[-] [email protected] 52 points 2 months ago* (last edited 2 months ago)

I am once again begging journalists to be more critical ~~of tech companies~~.

> But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

> [...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.
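As a sanity check on those figures: crash rates per million miles are simple division. Here's a rough Python sketch using the article's numbers; the human baseline below is an assumed illustrative value, not Waymo's or the article's figure.

```python
# Back-of-envelope check of the quoted figures. The human baseline is
# a made-up illustrative value, NOT a sourced number.
waymo_crashes = 60          # airbag-deployment/injury crashes (quoted)
waymo_miles = 50_000_000    # driverless miles (quoted)

waymo_rate = waymo_crashes / (waymo_miles / 1_000_000)
print(f"Waymo: {waymo_rate:.1f} serious crashes per million miles")

# Hypothetical human benchmark: if humans averaged, say, 4 serious
# crashes per million miles, the same 50M miles would yield:
human_rate = 4.0  # assumption for illustration only
print(f"Humans at that rate: {human_rate * 50:.0f} crashes over the same miles")
```

The whole argument turns on that baseline, which is exactly the number Waymo gets to estimate itself.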

This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not normal people who drive almost exclusively during their commutes (which is probably the most dangerous time to drive since it's precisely when they're all driving).

We also need to know how often Waymo employees intervene in the supposedly autonomous operations. The latest data we have on this, leaked a while back, showed that Cruise (a different company) cars required more than one employee per car—making them effectively less autonomous than a regular taxi.

edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own publishing without knowing how the sausage is made, they can spin their data however they want.

edit2: Updated to say that journalists should be more critical in general, not just about tech companies.

[-] [email protected] 43 points 1 year ago* (last edited 1 year ago)

I have worked at two different start ups where the boss explicitly didn't want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that they only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don't even get me started on people that the CEO wouldn't have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

It's very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.

44
submitted 1 year ago by [email protected] to c/[email protected]

It's so slow that I had time to take my phone out and take this video after I typed all the letters. How is this even possible?

[-] [email protected] 55 points 1 year ago

It's probably either waiting for approval to sell ads or was denied and they're adding more stuff. Google has a virtual monopoly on ads, and their approval process can take 1-2 weeks. Google's content policy basically demands that your site be full of generated trash to sell ads. I did a case study here, in which Google denied my popular and useful website for ads until I filled it with the lowest-quality generated trash imaginable. That might help clarify what's up.

[-] [email protected] 55 points 1 year ago* (last edited 1 year ago)

It's not a solution, but as a mitigation, I'm trying to push the idea of an internet right of way into the public consciousness. Here's the thesis statement from my write-up:

> I propose that if a company wants to grow by allowing open access to its services to the public, then that access should create a legal right of way. Any features that were open to users cannot then be closed off so long as the company remains operational. We need an Internet Rights of Way Act, which enforces digital footpaths. Companies shouldn't be allowed to create little paths into their sites, only to delete them, forcing guests to pay if they wish to maintain access to the networks that they built, the posts that they wrote, or whatever else it is that they were doing there.

As I explain in the link, rights of way already exist for the physical world, so the idea is easily explained to even the less technically inclined, and it gives us a useful legal framework for how digital rights of way should work.

[-] [email protected] 51 points 2 years ago

I do software consulting for a living. A lot of my practice is small organizations hiring me because their entire tech stack is a bunch of shortcuts taped together into one giant teetering monument to moving as fast as possible, and they managed to do all of that while still having to write every line of code.

In 3-4 years, I'm going to be hearing from clients about how they hired an undergrad who was really into AI to do the core of their codebase and everyone is afraid to even log into the server because the slightest breeze might collapse the entire thing.

LLM coding is going to be like every other industrial automation process in our society. We can now make a shittier thing way faster, without thinking of the consequences.

[-] [email protected] 44 points 2 years ago* (last edited 2 years ago)

That's a bad faith gotcha and you know it. My lemmy account, the comment I just wrote, and the entire internet you and I care about and interact with are a tiny sliver of these data warehouses. I have actually done sysadmin and devops for a giant e-commerce company, and we spent the vast majority of our compute power on analytics for user tracking and advertising. The actual site itself was tiny compared to our surveillance-value-extraction work. That was a major e-commerce website you've heard of.

Bitcoin alone used half a percent of the entire world's electricity consumption a couple of years ago. That's just bitcoin, not even including the other crypto. Now with the AI hype, companies are building even more of these warehouses to train LLMs.
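That half-a-percent claim is easy to sanity-check with rough numbers. Both values below are approximate assumptions for illustration (ballpark early-2020s estimates), not sourced figures.

```python
# Rough sanity check on the "half a percent" claim. Both inputs are
# approximate assumptions, not sourced numbers.
bitcoin_twh_per_year = 120      # assumed Bitcoin consumption, TWh/yr
world_twh_per_year = 25_000     # assumed world electricity use, TWh/yr

share = bitcoin_twh_per_year / world_twh_per_year
print(f"Bitcoin share of world electricity: {share:.1%}")
```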

[-] [email protected] 48 points 2 years ago* (last edited 2 years ago)

I'd say less than a week. Capitalism is something that we have to wake up and make happen every single day. How many days worth of food does the average person have? Definitely not 45 days. People would have to start self-organizing within 2-3 days, and in doing so, they would actively make something that isn't capitalism, which directly challenges those in power.

This is why every time there are emergencies or protests, the media is obsessed with "looting." If there's no food because of a hurricane or whatever, it is every single person's duty to redistribute what there is equitably. The news and capitalists (but I repeat myself) call that "looting," even when it's a well-organized group of neighbors going into a closed store to distribute spoiling food to hungry people.

Rebecca Solnit writes about this in detail in A Paradise Built in Hell. It's really good. She's an awesome writer.

[-] [email protected] 57 points 2 years ago

The purpose of a system is what it does. "There is no point in claiming that the purpose of a system is to do what it constantly fails to do." These articles about how social media is broken are constant. It's just not a useful way to think about it. For example:

> It relies on badly maintained social-media infrastructure and is presided over by billionaires who have given up on the premise that their platforms should inform users

These platforms are systems. They don't have intent. There's no mens rea or anything. There is no point saying that social media is supposed to inform users when it constantly fails to inform users. In fact, it has never informed users.

Any serious discussion about social media must accept that the system is what it is, not that it's supposed to be some other way, and is currently suffering some anomaly.

[-] [email protected] 45 points 2 years ago* (last edited 2 years ago)

"Capitalism is just human nature."

If it's just human nature, then why do we need a militarized police force to enforce order? Having workers go to a workplace, do labor, and then send the profits to some far away entity that probably isn't even there is actually very far from human nature. It's something that necessarily requires the implied threat of violence to maintain. Same with tenants and landlords. No one would pay rent if it wasn't for the police, who will use violence to throw you out otherwise.

It also frustrates me how that argument just waves away the incredibly complex and actually extremely arbitrary legal structure of capitalism. What about human nature contains limited liability for artificial legal entities controlled by shareholders? "Ah yes, here's the part of the human genome that expresses preferred and common stock; here's the part that contains the innate human desire for quarterly earnings calls."

edit: typo

[-] [email protected] 45 points 2 years ago

Is that really all they do though? That's what they've convinced us that they do, but everyone on these platforms knows how crucial it is to tweak your content to please the algorithm. They also do everything they can to become monopolies, without which it wouldn't even be possible to start on DIY videos and end on white supremacy or whatever.

I wrote a longer version of this argument here, if you're curious.

[-] [email protected] 58 points 2 years ago

This study is an agent-based simulation:

> The researchers used a type of math called “agent-based modeling” to simulate how people’s opinions change over time. They focused on a model where individuals can believe the truth, the fake information, or remain undecided. The researchers created a network of connections between these individuals, similar to how people are connected on social media.
>
> They used the binary agreement model to understand the “tipping point” (the point where a small change can lead to significant effects) and how disinformation can spread.
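For concreteness, the binary agreement dynamics described there can be sketched in a few lines. This is a generic textbook version of the model (complete graph, illustrative parameter values), not the study's actual code.

```python
import random

# Minimal binary agreement (naming-game) model: agents hold opinion
# "A", "B", or both (undecided). A committed fraction p holds "A" and
# never updates. All parameter values are illustrative.
def simulate(n=300, p_committed=0.15, steps=120_000, seed=0):
    rng = random.Random(seed)
    n_committed = int(p_committed * n)
    opinions = [{"A"} for _ in range(n_committed)] + \
               [{"B"} for _ in range(n - n_committed)]

    for _ in range(steps):
        s, l = rng.sample(range(n), 2)         # random speaker, listener
        word = rng.choice(sorted(opinions[s]))  # speaker utters one opinion
        if word in opinions[l]:
            # agreement: both collapse to the word (unless committed)
            if s >= n_committed:
                opinions[s] = {word}
            if l >= n_committed:
                opinions[l] = {word}
        elif l >= n_committed:
            # disagreement: listener becomes undecided
            opinions[l] = opinions[l] | {word}

    return sum(o == {"A"} for o in opinions) / n

# Above the ~10% tipping point, the committed minority flips everyone.
frac_a = simulate()
print(f"fraction holding only A: {frac_a:.2f}")
```

Note how much is baked into modeling choices like the network topology and the update rule, which is exactly why interpreting the "results" is its own can of worms.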

Personally, I love agent-based models. I think agent-based modeling is a very powerful tool for systems insight, but I don't like this article's interpretation, nor am I convinced the author really groks what agent-based modeling is. It's a very different kind of "study" than what most people mean when they use that word, and interpreting the insights is its own can of worms.

Just a heads up, for those of you casually scrolling by.

[-] [email protected] 60 points 2 years ago

The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less code now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.

The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms -- basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?

All code now has to be written at least once. With ChatGPT, it doesn't even need to be written once! We can generate arbitrary amounts of code all the time whenever we want! We're going to have so much fucking code, and we have absolutely no idea how to deal with that.
