[-] tocano@piefed.social 15 points 14 hours ago

a relative time formatting library that contains no code

The library is two text files (code) that are processed by an LLM (interpreter) to generate code of another type. This is not that new in terms of workflow.

I think what makes this the worst is that the author admits you can't be sure the library will work until you generate the code and test it. Even then you cannot guarantee the security of the generated code, and since you do not understand the code you also cannot give support or patch it.

When Performance Matters

If the performance of a datetime processor is not relevant, what is? The author mentions they would like a browser implementation to be fast, documentable, and fixable. However, operating systems, browsers, and other complex systems are made of little utilities like this that have very well documented functionality and side effects.

But the above isn’t fully baked. Our models will get better, our agents more capable.

The whole assumption is that instead of creating a good, stable base that anyone can use, we should just be shitting out code until it works.

Eventually the hardware will be good enough to support a shitty bloated browser so we don't need to optimize it.

Eventually people will harden their PC enough so we shouldn't care about security.

[-] pinball_wizard@lemmy.zip 4 points 3 hours ago

Eventually people will harden their PC enough so we shouldn't care about security.

Yes. LoL. The saying "you can't fix stupid" applies. Extra true when LLM-generated code is running for increasingly high-criticality purposes.

[-] TehPers@beehaw.org 6 points 10 hours ago

The library is two text files (code) that are processed by an LLM (interpreter) to generate code of another type. This is not that new in terms of workflow.

I think what makes this the worst is that the author admits you can't be sure the library will work until you generate the code and test it. Even then you cannot guarantee the security of the generated code, and since you do not understand the code you also cannot give support or patch it.

I've tried explaining how LLMs are not equatable to compilers/interpreters in the past, and it's usually to people who aren't in software roles. What it usually comes down to when I try to explain it is determinism. A compiler or interpreter deterministically produces code with some kind of behavior (defined by the source code). They often are developed to a spec, and the output doing the wrong thing is a bug. LLMs producing the wrong output is a feature. It's not something you try to fix, and something you often can't fix.

This, of course, ignores a lot of "lower level" optimizations someone can make around specific algorithms or data structures. I put "lower level" in quotes because those are some of the most important decisions you can make while writing code, but turning off your brain and letting an LLM do it for you "abstracts" those decisions away to a random number generator.
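To make that point about algorithm and data structure decisions concrete, here is a generic C sketch (not code from the thread or from the library under discussion; the function names are made up for illustration): both functions satisfy the same informal spec, "return the index of key in a sorted array, or -1", so a prompt that only states that spec leaves the choice between an O(n) and an O(log n) implementation to the generator.

/* Two implementations that both satisfy the informal spec "return the index
 * of `key` in a sorted array, or -1 if absent". Which one a code generator
 * hands you is not pinned down by that spec, even though the difference in
 * cost on large inputs is exactly the kind of decision discussed above. */
#include <stdio.h>

/* O(n): scan every element. */
static int find_linear(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

/* O(log n): binary search, relying on the array being sorted. */
static int find_binary(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int sorted[] = {1, 3, 5, 7, 9, 11};
    int n = (int)(sizeof sorted / sizeof sorted[0]);
    /* Same input, same answer from both -- but very different cost at scale. */
    printf("%d %d\n", find_linear(sorted, n, 7), find_binary(sorted, n, 7));
    return 0;
}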

[-] tocano@piefed.social -1 points 9 hours ago* (last edited 9 hours ago)

I agree that LLMs are made to be more exploratory; this is good because it allows them to experiment with more varied topics instead of always saying the same thing. However, I do not agree that it is a feature for code generation, where you need the output to follow a strict ruleset (code syntax, specification, tests). Whatever errors it generates and people accept are small mistakes within that person's threshold of acceptance, traded off against the cost of fixing the problem. In some contexts we see people focusing almost only on the short term, which leads to a lot of errors being allowed.

Moreover, you cannot say compilers are deterministic. There are situations where they are not (at least for the user).

https://krystalgamer.github.io/high-level-game-patches/

GCC's unwarranted behaviour

In order to keep the code as small as possible I was compiling the code with -Os. Everything was working fine until I started to remove some printfs and started to get some crashes. Moving function calls around also seemed to randomly fix the problem, this was an indication that somehow memory/stack corruption was happening. After a lot of testing, I figured out that if -O2/-O3/-Os were used then the problem would appear. The issue was caused by Interprocedural analysis or IPA. One of its functions is to determine whether registers are polluted across function calls and if not then re-use them.
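For readers who have not run into this class of problem, here is a much smaller C sketch in the same spirit (this is not the code from the linked post, just a generic illustration): the loop below invokes undefined behaviour, so its observable behaviour can legitimately change with the optimization level, yet each compiled binary still behaves the same on every run.

/* A minimal sketch, not the code from the linked post: the loop below has
 * undefined behaviour (signed integer overflow), so the optimizer is allowed
 * to assume `i > 0` never becomes false. Built with -O0 this typically prints
 * "terminated after 31 doublings"; at -O2/-O3/-Os some compiler versions may
 * turn the loop into an infinite one. Either way, any given binary behaves
 * the same on every run: surprising, but not nondeterministic. */
#include <stdio.h>

int main(void) {
    int i = 1, steps = 0;
    while (i > 0) {     /* undefined behaviour once i * 2 exceeds INT_MAX */
        i *= 2;
        steps++;
    }
    printf("terminated after %d doublings\n", steps);
    return 0;
}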

[-] TehPers@beehaw.org 1 points 1 hour ago

Moreover, you cannot say compilers are deterministic. There are situations where they are not (at least for the user).

https://krystalgamer.github.io/high-level-game-patches/

I'm not following. Which part of this is nondeterministic?

The language being complicated to write and the compiler being confusing to use isn't an indicator of nondeterminism. If GCC were truly nondeterministic, that'd be a pretty major bug.

Also, note that I mentioned that the output behavior is deterministic. I'm not referring to reproducible builds, just that it always produces code that does what the source specifies (in this case according to a spec).

[-] ignirtoq@feddit.online 2 points 3 hours ago

Determinism means performing the same way every time it is run with the same inputs. It doesn't mean it follows your mental model of how it should run. The article you cite talks about aggressive compiler optimization causing unexpected crashes. Unexpected, not unpredictable. The author found the root cause and addressed it. Nothing there was nondeterministic. It was just not what the developer expected, or personally thought was an appropriate implementation, but it performed the same way every time. I think you keyed on the word "randomly" and missed "seemed to," which completely changes the meaning of the sentence.

LLMs often act truly nondeterministically. You can create a fresh session, feed it exactly the same prompt, and it will produce a different output. This unpredictability is poison for producing a quality, maintainable product when dynamic LLM code generation is in the pipeline.
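A deliberately toy C sketch of that distinction (no real model involved; the candidate strings and function names are invented for illustration): one function samples its output using a clock-based seed, so the same "prompt" can yield a different answer on each run, while the other derives its choice purely from the input bytes and therefore always returns the same thing.

/* Toy illustration only: `generate` picks one of several acceptable-looking
 * outputs at random, seeded from the wall clock, so the same "prompt" can
 * yield a different answer on every run. `translate` derives its choice only
 * from the input, so the same input always yields the same output. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static const char *candidates[] = {
    "return n + \" seconds ago\";",
    "return sprintf_style(\"%d seconds ago\", n);",
    "return format_relative(n, SECONDS);",
};
#define NCAND (sizeof candidates / sizeof candidates[0])

/* Sampling: like generation with temperature > 0. */
static const char *generate(const char *prompt) {
    (void)prompt;                       /* prompt does not pin the result */
    return candidates[rand() % NCAND];
}

/* Deterministic mapping: output depends only on the input bytes. */
static const char *translate(const char *source) {
    unsigned h = 0;
    for (const char *p = source; *p; p++)
        h = h * 31u + (unsigned char)*p;
    return candidates[h % NCAND];
}

int main(void) {
    srand((unsigned)time(NULL));        /* a fresh "session" every run */
    const char *prompt = "format a relative time in seconds";
    printf("generated:  %s\n", generate(prompt));   /* varies run to run */
    printf("translated: %s\n", translate(prompt));  /* same every run   */
    return 0;
}

Seeding the sampler with a fixed value would make the runs repeatable, but it still would not make the output follow a spec, which is the distinction being drawn above.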

[-] tocano@piefed.social 2 points 48 minutes ago* (last edited 40 minutes ago)

I have talked with the author to confirm what he meant in this and other posts he made on compilation. He confirmed that most (if not all) C compilers are not deterministic and pointed me here as an example. He added that optimizations are not applied in a deterministic order, and adding LTO makes the problem worse.
