Fun With Dada (smallcultfollowing.com)
Feeding the Data Deity (reflectingwide.blogspot.com)
Updates and Bot Wars (marisabel.nl)
I Am Happier Writing Code by Hand (www.abhinavomprakash.com)

This is the paradox: AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.

Systems Thinking (theprogrammersparadox.blogspot.com)
Testing can be fun, actually (giacomocavalieri.me)
[-] codeinabox@programming.dev 19 points 2 days ago

Instead, most organisations don’t tackle technical debt until it causes an operational meltdown. At that point, they end up allocating 30–40% of their budget to massive emergency transformation programmes—double the recommended preventive investment.

I can very much relate to this statement. Many of the contracts I've worked on in the last few years have been transformation programmes, where an existing product is rewritten and replatformed, often because of the level of tech debt in the legacy system.

[-] codeinabox@programming.dev 1 points 2 days ago

I originally shared this after stumbling upon it in one of Martin Fowler's posts.

The article reminds me of how my mother used to buy dress patterns, blueprints if you will, for making her own clothes. This no-code library is much the same, because it offers blueprints for anyone who wants to build their own implementation.

So the thing that interests me is what has more value: the code or the specifications? You could argue that in this age of AI-assisted coding, code is cheap, but business requirements still involve a lot of effort and research.

To give a non-coding example, I've been wanting to get some cupboards built, and every time I contact a carpenter about this, it's quite expensive to get something bespoke made. However, if I could buy blueprints that I could tweak, then in theory, I could get a handyman to build it for a lower cost.

This is a very roundabout way of saying I do think there are some scenarios where the specifications would be more beneficial than the implementation.

[-] codeinabox@programming.dev 27 points 1 week ago

I am not surprised that there are parallels between vibe coding and gambling:

With vibe coding, people often report not realizing until hours, weeks, or even months later whether the code produced is any good. They find new bugs or they can’t make simple modifications; the program crashes in unexpected ways. Moreover, the signs of how hard the AI coding agent is working and the quantities of code produced often seem like short-term indicators of productivity. These can trigger the same feelings as the celebratory noises from the multiline slot machine.

[-] codeinabox@programming.dev 57 points 2 weeks ago

I think the most interesting, and also most concerning, point is the eighth: that people may become busier than ever.

After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.

Consider the advent of the steam shovel, which allowed humans to dig holes faster than a team using hand shovels. It made existing projects faster and new projects possible. But think about the human operator of the steam shovel. Suddenly, we had a tireless tool that could work 24 hours a day if fueled up and maintained properly, while the human piloting it would need to eat, sleep, and rest.

In fact, we may end up needing new protections for human knowledge workers using these tireless information engines to implement their ideas, much as unions rose as a response to industrial production lines over 100 years ago. Humans need rest, even when machines don’t.

This does sound very much like what Cory Doctorow refers to as a reverse-centaur, where the developer's responsibility becomes overseeing the AI tool.

[-] codeinabox@programming.dev 12 points 4 weeks ago

This article is quite interesting! There are a few standout quotes for me:

On one hand, we are witnessing the true democratisation of software creation. The barrier to entry has effectively collapsed. For the first time, non-developers aren’t just consumers of software - they are the architects of their own tools.

The democratisation effect is something I've been thinking about myself, as hiring developers or learning to code doesn't come cheap. However, if it allows non-profits to build ideas that can make our world a better place, then that is a good thing.

We’re entering a new era of software development where the goal isn't always longevity. For years, the industry has been obsessed with building "platforms" and "ecosystems," but the tide is shifting toward something more ephemeral. We're moving from SaaS to scratchpads.

A lot of this new software isn't meant to live forever. In fact, it’s the opposite. People are increasingly building tools to solve a single, specific problem exactly once—and then discarding them. It is software as a disposable utility, designed for the immediate "now" rather than the distant "later."

I've not thought about it in this way, but this is a really good point. When code is cheap, it becomes easier to create bespoke, short-lived solutions.

The real cost of software isn’t the initial write; it’s the maintenance, the edge cases, the mounting UX debt, and the complexities of data ownership. These "fast" solutions are brittle.

Though, as much as these tools might democratise software development, the solutions they produce still require engineering expertise to be sustainable.

[-] codeinabox@programming.dev 17 points 1 month ago

Thank you! I've added the image to the post as well.

[-] codeinabox@programming.dev 67 points 1 month ago

I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:

And if you think of LLMs as an extra teammate, there's no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.^___^

At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can't teach an LLM. Yes, I can give it guardrails and detailed prompts, but it can't learn in the same way a teammate can. It will always require supervision and review of its output, whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.

[-] codeinabox@programming.dev 26 points 1 month ago

My understanding of how this relates to Jevons paradox is that it had been believed advances in tooling would mean companies could lower their headcount, because developers would become more efficient. However, it has had the opposite effect:

Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we’d need fewer developers. Each one instead enabled us to build more software.

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don’t reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.
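
To make the mechanism concrete, here's a toy calculation. Every number below is invented purely for illustration, not taken from the article:

```python
# Toy model of Jevons paradox applied to software development.
# All figures are made up to illustrate the mechanism.
cost_per_feature_before = 10   # dev-days per feature with old tooling
cost_per_feature_after = 2     # dev-days per feature with new tooling

# If demand is elastic enough, cheaper features unlock a backlog of
# ideas that previously weren't worth building.
features_built_before = 50
features_built_after = 400

print(cost_per_feature_before * features_built_before)  # 500 dev-days
print(cost_per_feature_after * features_built_after)    # 800 dev-days
# Total developer effort rises even though each feature got 5x cheaper.
```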

[-] codeinabox@programming.dev 21 points 1 month ago

Based on my own experience of using Claude for AI coding and the Whisper model on my phone for dictation, AI tools can be very useful for the most part. Yet there are nearly always mistakes, even if they are quite minor at times, which is why I am sceptical of AI taking my job.

Perhaps the biggest reason AI won't take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you'd be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[-] codeinabox@programming.dev 24 points 2 months ago

This quote from the article very much sums up my own experience of Claude:

In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.

AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing it is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.
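
To make that concrete, here is a hypothetical sketch of the nesting pattern the quote describes, alongside the flatter guard-clause version I'd normally refactor it to. The function and field names are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class User:
    is_active: bool


@dataclass
class Order:
    total: float


# The nested style the quote describes: every condition adds a level.
def apply_discount(order, user):
    if order is not None:
        if user is not None:
            if user.is_active:
                if order.total > 100:
                    return order.total * 0.9
                else:
                    return order.total
            else:
                return order.total
        else:
            return order.total
    else:
        return 0


# The same logic flattened with guard clauses.
def apply_discount_flat(order, user):
    if order is None:
        return 0
    if user is None or not user.is_active or order.total <= 100:
        return order.total
    return order.total * 0.9
```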

[-] codeinabox@programming.dev 9 points 3 months ago

A good companion piece to this article is the Dead Framework Theory article, which discusses AI coding tools bolstering React's dominance.

[-] codeinabox@programming.dev 10 points 3 months ago

Exactly, but generative AI has exacerbated the problem:

What is new is the scale of the problem being created as lightning-speed code generators spew reams of unread code into millions of projects
