174
you are viewing a single comment's thread
view the rest of the comments
[-] in_my_honest_opinion@piefed.social 131 points 1 week ago

Hahahaha this idiot thinks that it's the speed of our typing that ships code faster. He's in a knowledge shortage.

[-] teft@piefed.social 55 points 1 week ago

I’ve found that the people who understand these “agents” the least are the ones who are promoting them the most.

[-] mech@feddit.org 42 points 1 week ago* (last edited 1 week ago)

And everyone promotes them for tasks they aren't experts in.
Managers think they could replace devs, but never a manager.
Devs think they could replace management but never a senior developer.
Storyboard drawers think they can write screenplays. Screenplay writers think they can draw storyboards. Etc.
As an expert, you know how shit AI is in your own field, but surely those other jobs are simple enough to be replaced.

[-] Rivalarrival@lemmy.today 25 points 1 week ago

Let's be honest, though: they absolutely could replace management.

[-] mrgoosmoos@lemmy.ca 3 points 6 days ago

some management, sure. just like some of your coworkers could probably be replaced by AI. but not the competent ones, and not the essential ones.

and personally, I'd still rather work with an incompetent person who can improve than four incompetent chatbots

although I'd rather work with no incompetence at all

[-] Rivalarrival@lemmy.today 4 points 6 days ago

The principal task of a competent manager is, primarily, intervening between incompetent upper managers and actual workers. Replacing the incompetent manager removes the need for the competent one.

[-] cynar@lemmy.world 23 points 1 week ago

A good manager is both a coordinator and a filter. They deal with bs rolling down from above and keep their team running efficiently.

A good manager is worth their weight in gold. A bad manager isn't worth their weight in bullshit.

[-] clif@lemmy.world 5 points 6 days ago

I've also enjoyed the term "shit umbrella" for a good manager.

[-] Flamekebab@piefed.social 8 points 1 week ago

Yeah, our PM is great. Our previous one not so much.

He trusts us but also handles absolutely loads of stuff that we don't want to deal with.

It's very easy to replace something that was never critical to the process in the first place. My manager essentially updates my git tickets with what I did. We talk for 5 minutes a week. He just kinda lets me do my thing, I am fully aware of how lucky I am.

[-] panda_abyss@lemmy.ca 7 points 1 week ago

This. 

They’re incredibly useful, but you have to treat their output as disposable and untrustworthy. They’re reinforcement trained to generate a solution, regardless of if it’s right, because it’s impossible to AI evaluate that these solutions are correct at scale.

If you’re writing some core code: you can use an agent to review it, refactor parts, stump the original version, infill methods, and to run your test/benchmark scripts. 

but you still have to manage it, edit it, make sure it’s not recreating the same code in 6 existing modules, generating faked tests, etc. 


As an example this week on my side project I had Claude Opus write some benchmarks. Total throwaway code.

It actually took my input files, generated a static binary payload from it using numpy, and loaded that into my app’s memory (on its own that’s really cool), then it ran my one function and declared the whole system 100x faster than comparable libraries that parse the original data. Not a fair test at all, nor was it a useful test.

You cannot trust this software. 

You’ll see these games metrics, gamed tests, duplicate parallel implementations, etc. 

[-] CIA_chatbot@lemmy.world 4 points 1 week ago

They are also the ones who have super leveraged their portfolios with AI stocks -

this post was submitted on 30 Jan 2026
174 points (93.1% liked)

Programming Circlejerk

281 readers
1 users here now

Community to talk about enlightened programming takes

Rules:

founded 1 year ago
MODERATORS