[-] BranBucket@lemmy.world 1 points 16 hours ago

If you want to get into how this happens, and the way it happens with other technologies, I'd suggest Neil Postman's Technopoly and Amusing Ourselves To Death as a good start.

[-] BranBucket@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

And I hate it these days. I really do.

I understand why the better creators make their videos the way they do. I understand why there are channels that just churn out hundreds of low effort vids every day. I get it. At the same time, even the things that are considered quality content on YouTube just don't appeal to me anymore.

People send me links and I can hardly be bothered to watch them, let alone browse for hours.

Oh well.

[-] BranBucket@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

Oof. So little faith in your fellow man.

Guilty as charged. I often wonder what effect dealing with quality control and safety has on my mentality. Much like first responders see a lot of people at their worst, I see a lot of them at their dumbest and laziest.

I think we're still at a net gain over where we were in 1906, but that's subjective. Most of us live longer and more comfortable lives, but that could change if we're not careful, and I don't think we're being particularly careful in this decade. I'm a bit pessimistic, but I don't see it as a bad thing. Back on aviation, the old saying is that it takes an optimist to invent the airplane, and it takes a pessimist to invent the parachute.

I'd rather keep meteors out of it. Some of the planet is quite pretty and whatever species takes over for us might appreciate the view.

[-] BranBucket@lemmy.world 1 points 3 days ago

Yup. I'll take the bet.

After all, your expectation of the impact of AI is arguably the better outcome for humanity, isn't it? I'm expecting a sharp increase in horrific industrial accidents and the slow but steady regression of human intellect until we're all mindless drones from sector 7-C. =P

That's a good bet to lose.

Besides, actually paying out on oddball, five year old bets is the kind of thing that made the pre-social media, pre-AI internet great, and I miss that.

[-] BranBucket@lemmy.world 1 points 3 days ago* (last edited 3 days ago)

If I'm arguing in good faith, it's both. We have a tool that uses us, a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I'm going to say is the useful retention and application of that information, and not just for winning trivia night), and as a species we refuse to stop letting ourselves be suckered by it.

In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So as you put it, writing did it, books, television, search engines, etc. And so much money has been spent on making this a thing that we're going to have to contend with it until it undeniably starts costing more than it's worth, and if that cost is cultural or societal instead of financial, it might never go away.

I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

I don't pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860s with the telegraph.

[-] BranBucket@lemmy.world 2 points 3 days ago

Right. We can't fully blame the existence or even the use of AI. But given the way AI is often used, and the way my armchair studies of human nature tell me it will continue to be used, I think it will lead to more events like this. The trend of easy access and low retention did indeed start before LLMs, but they don't seem to be a remedy for it from what I can tell. At best they're neutral, and I'd argue they make it worse.

We could (and frankly probably will need to, since I doubt AI will be abandoned given the sheer volume of cash that has been dumped into it) build processes to account for the failings of LLMs and the failings in how we use them. Or we could look at existing methods, those we understand and have learned to work effectively with, and reapply them as needed.

My bet is that LLMs and genAI will exacerbate the trend of being info rich and knowledge poor, and the processes we have to create in order to safely and effectively apply it are going to be more costly than any efficiency we get out of adopting it. I could be wrong, but I'd bet you a six-pack of whatever you drink that I'm not. Collectable in five years, if Lemmy hasn't been replaced by LLMmy by then. I'll even ship international if need be. =)

[-] BranBucket@lemmy.world 1 points 3 days ago* (last edited 3 days ago)

You're right that we can't rule out complacency and human error. And we have internal reviews precisely to account for complacency. Again, I'm intimately familiar with both the safety culture and the people involved; this is an unusual and recent development. But I suppose asking you to take my word for it might strain credulity. That is what it is.

I'd be inclined to agree with you more if it weren't for how widespread the smaller issues are. The general trend, among the old and young, is less actual knowledge of the job and more reliance on quick access to information that often isn't applied properly in context. It existed before AI, and has gotten worse with its introduction. Something about instant access to information seems to harm retention and application of that info. The trend is pretty obvious from where I sit, as part of my job is to ensure that information is retained and applied properly.

Those procedures built around autopilot, along with other issues of flying more complicated modern aircraft, were dealt with by controlling how information flowed, how it was communicated, and the weight of authority it was given, often with human processes like Crew Resource Management. As I've said, the presentation of information absolutely changes how people understand and apply it. CRM helps because it prompts people to present information to each other in a way that facilitates better decision making and delegation in a crisis.

But autopilot has always been beneficial; its value was obvious right from the start. It reduces pilot fatigue on long-haul flights and helps keep air traffic in the right place. Pilot complacency was never really a worry, but malfunctions were.

In the end, it's not that it can't be done. We could adjust our processes to include LLMs simply because people think they're neat. It's just that there's no compelling evidence that it's better for distributing information, developing procedures, or teaching people how not to die.

[-] BranBucket@lemmy.world 2 points 3 days ago

I would ask it a careful question, and I would get a well worded, persuasive, but ultimately careless reply that's just repetition of information and devoid of any new reasoning or insight.

I would carefully ruminate on this reply, and find that at best, it's factually correct because it's an echo of the training data fed into the model, and although it sounds highly persuasive, it likely will need additional work to be adapted into the specific context and details of my situation.

But that's not my main complaint. My complaint is that the medium used seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves To Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

[-] BranBucket@lemmy.world 4 points 3 days ago* (last edited 3 days ago)

It's not that I think there are no legitimate uses for AI, or that it couldn't be used as a learning tool.

It's that I doubt it's better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you're describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.

[-] BranBucket@lemmy.world 33 points 4 days ago* (last edited 4 days ago)

I feel like this is a progression of a trend I've been railing against for a while. My workplace has to contend with a massive amount of ever-changing regulatory and engineering information. There are thousands of pages of documents, with differing levels of authority and detail, governing all aspects of what we do.

I've been begging people to read the docs. Don't just ask your manager or predecessor, don't just skim through it, and for fuck's sake don't ctrl+f until you find something that looks good and run with it out of context. Treating this sort of research like a Google search is killing us during compliance inspections. Read the docs!

Shit changes, often. I have to constantly remind them, it's not what the docs said last year. It's what they say now. Know your responsibilities, know where to find the info that pertains to them, and review it often. Read it, know it, or at least know where to find it.

It's getting worse. I've seen experienced people submit supplemental documents with egregious errors after they "just used AI for grammar checking". I've seen proposed policy docs with references to regulations that are decades out of date. I've gotten questions about implementing things that were outlawed or obsolete before I was born, and I've been around a looooong while.

We can't meat puppet our way through this, blindly following AI, or people are going to die in horrible industrial accidents. I mean that literally. People will be killed. This is why we have the current mass quantities of regulatory documents: to prevent people from literally dying in awful ways.

I'm too old for this shit.

[-] BranBucket@lemmy.world 6 points 4 days ago

A friend of mine's first assignment as a senior engineer was to find ways to eliminate more moving parts and metal fasteners from cheaper spec products, because removing a dozen two-cent screws would save the company tens of millions over the life of the design. Not just in parts, but because they're more complicated and take longer to install than just snapping and gluing a plastic shell together.

With the scale of manufacturing at companies like GM and Ford, saving a few thousand per car on parts and labor with a touchscreen infotainment system is a massive, massive amount of money. The R&D costs of converting from knobs to touchscreens would probably be covered in the first few months.

[-] BranBucket@lemmy.world 36 points 4 days ago* (last edited 4 days ago)

Corporations pay stagnant wages, raise prices, funnel money out of the economy to shareholders who hoard wealth, and then get worried when there's no one left who can buy their products?

Tell me again why we think C-Suite folks are smart?

Right, because they'll get bailed out again and stay rich. That's why.

It's a god damn disgrace.

I'm sure someone will come around and tell me how complicated economics is and why we should trust business and industry leaders who went to school for this sort of thing, like basic pattern recognition and common sense couldn't have predicted that people who can barely afford groceries would stop buying cars...

Fuck.

SMP Selle TRK medium. Super comfy. Best decision I've made since buying the bike.
