[-] 14th_cylon@lemmy.zip 28 points 2 months ago

i have read it all hoping to find out what he is talking about... instead, the blog post ended 🤷‍♂️

[-] gtrcoi@programming.dev 3 points 2 months ago

I'm guessing he's alluding to a bunch of asserts, data sanitization, and granular error reporting. But yea, who knows.
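A minimal sketch of the style being guessed at here, combining all three in one small function; `parse_port` and its messages are invented for illustration, not from the blog post:

```python
# Hypothetical example of "asserts, data sanitization, and granular
# error reporting" in one place. Names are illustrative only.

def parse_port(raw: str) -> int:
    """Parse a TCP port from user input, failing loudly and specifically."""
    cleaned = raw.strip()  # sanitization: trim stray whitespace before validating
    if not cleaned.isdigit():
        # granular error: says what was wrong and echoes the bad input
        raise ValueError(f"port must be numeric, got {raw!r}")
    port = int(cleaned)
    # assert: internal invariant check that surfaces bugs early in debug runs
    assert 0 < port < 65536, f"port out of range: {port}"
    return port
```

The point of the granular message is that a failure tells you *which* input broke *which* check, rather than surfacing later as a confusing downstream error.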

[-] FishFace@piefed.social 22 points 2 months ago

The word you are looking for is "robust".

Debugging isn't the worst thing in programming. The worst thing is having a task you need to do and a solution already written, but not knowing how to use the solution to solve the task.

[-] 14th_cylon@lemmy.zip 13 points 2 months ago

The word you are looking for is “robust”.

As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure).

https://en.wikipedia.org/wiki/Antifragility

[-] FishFace@piefed.social 1 points 2 months ago

Uh huh. But fragile code is not (just) code that tends toward getting worse.

[-] ferrule@sh.itjust.works 3 points 2 months ago

The issue is twofold.

First, the scope of the project matters a lot. When I am working on a web app, even the most complicated project is still 90% boilerplate. You write some RESTful code on some framework using CRUD and make a UI that draws based on data. No matter what you are making, let's be honest, it's not novel. This is why vibe coding can exist. Most of your unit tests can be derived from the types in your functions. Do a little bit of tracing through functions and AI can easily make your code less fragile.
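To illustrate the "tests derived from types" claim above, here's a hypothetical CRUD-style function (`create_user` and `User` are invented for the example) where the obvious tests fall straight out of the signature:

```python
# Sketch of a boilerplate CRUD "create" whose tests are mechanically
# derivable from its type signature. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def create_user(name: str, next_id: int) -> User:
    """Typical boilerplate: construct and return the new record."""
    return User(id=next_id, name=name)

# Tests suggested directly by the signature (str, int) -> User:
user = create_user("alice", 1)
assert isinstance(user, User)   # return type holds
assert user.id == 1             # inputs flow through to the record
assert user.name == "alice"
```

Nothing here requires domain knowledge, which is exactly why an AI can generate it reliably; the harder cases below are where that stops being true.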

When you are working on anything more complicated, making code better requires you to actually grok the business requirements. Edge cases aren't as simple. The reasons for doing things a specific way aren't so superficial. Especially when you start having to write optimizations the compilers don't do automatically.

The second issue is learning material. The majority of the code we write is buggy, not just at the edges of its input ranges but in how it solves the problem. There is a reason we don't typically write once and never go back to our code.

Now think about when you, as a human, go back over old code. The commit log and blame usually don't give a great picture of why the change was needed. Not unless the dev was really detailed in their documentation. And even then it requires domain knowledge and conceptualization that AI still can't do.

When teaching humans to be better at development, we suck at it even when we can grok the language and the business needs. That is a hurdle we still need to clear with AI.

[-] MonkderVierte@lemmy.zip 1 points 2 months ago* (last edited 2 months ago)

For example, if I’m vibe-coding a quick web app with more JavaScript than I care to read

Ah, please don't publish that code then. It's an experiment, not something juniors should come to see as "good enough".

[-] spireghost@lemmy.zip -5 points 2 months ago

Large language models can generate defensive code, but if you’ve never written defensively yourself and you learn to program primarily with AI assistance, your software will probably remain fragile.

This is the thesis of this argument, and it's completely unfounded. "AI can't create antifragile code". Why not? Effective tests and debug-time checks, at this point, come straight from Claude without me even prompting for them. Even if you are rolling the code yourself, you can use AI to throw a hundred prompts at it asking "does this make sense? are there any flaws here? what remains untested or out of scope that I'm not considering?" like a juiced-up static analyzer.

[-] TehPers@beehaw.org 9 points 2 months ago

Why not?

Are you asking the author or people in general? If the author didn't answer "why not" for you, then I can.

Yes, I've used Claude. Let's skip that part.

If you don't know how to write or identify defensive code, you can't know whether the LLM generated defensive code. So for an LLM to be trusted to generate defensive code, it needs to do so 100% of the time, or very close to it.

You seem to be under the impression that Claude does so, but you presumably can tell if code is written with sufficient guards and tests. You know to ask the LLM to evaluate and revise the code. Someone without experience will not know to ask that.

Speaking now from my experience, after using Claude for work to write tests, I came out of that project with no additional experience writing tests. I had to do another personal project after that to learn the testing library we used. Had that work project given me sufficient time to actually do the work, I'd have spent some time learning the testing library we used. That was unfortunately not the case.

The tests Claude generated were too rigid. It didn't test important functionality of the software. It tested exact inputs/outputs using localized output values, meaning changing localizations was potentially enough to break tests. It tested cases that didn't need to be tested, like whether certain dependency calls were done in a specific order (those calls were done in parallel anyway). It wrote some good tests, but a lot of additional tests that weren't needed, and skipped some tests that were needed.
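A hedged sketch of the brittleness described above: a test pinned to an exact localized string versus one that checks the behavior that actually matters. `format_price` is an invented stand-in, not code from the project being discussed:

```python
# Illustration of rigid vs. robust tests around localized output.
# The function and locales are hypothetical examples.

def format_price(cents: int, locale: str = "en_US") -> str:
    """Render a price in cents with a locale-dependent currency symbol."""
    symbol = {"en_US": "$", "de_DE": "€"}[locale]
    return f"{symbol}{cents / 100:.2f}"

# Rigid: pinned to one localization; changing the default locale or the
# currency symbol breaks this test even though the logic is still correct.
assert format_price(1999) == "$19.99"

# More robust: tests the numeric formatting property across localizations.
assert format_price(1999).endswith("19.99")
assert format_price(1999, "de_DE").endswith("19.99")
```

The rigid assertion is the kind an LLM tends to emit from observed output; the robust one requires knowing which property the test is supposed to protect.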

As a tool to help someone who already knows what they're doing, it can be useful. It's not a good tool for people who don't know what they're doing.

this post was submitted on 30 Nov 2025
36 points (86.0% liked)
