submitted 4 days ago* (last edited 4 days ago) by [email protected] to c/[email protected]

Just ask it to rewrite the shitty code you wrote in a language you barely understand to "follow standard best practices in <language>" or something like that, and it will add advanced typing features, functional programming for iterables, advanced exception handling, proper concurrency handling, optimized control flow, better equivalent functions, etc.
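A made-up sketch of the kind of before/after I mean (function and field names are hypothetical, Python just for illustration):

# before: what I'd write in a language I barely know
def get_names(items):
    result = []
    for i in range(len(items)):
        if items[i]["active"] == True:
            result.append(items[i]["name"])
    return result

# after the "best practices" pass: typing plus a comprehension instead of index juggling
from typing import TypedDict

class Item(TypedDict):
    name: str
    active: bool

def get_names(items: list[Item]) -> list[str]:
    return [item["name"] for item in items if item["active"]]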

As long as you understand the foundations of these concepts in at least one language, you can get pretty close to an expert in most languages instantly. Especially since most of them are C-based and pretty similar

The output will sometimes change the logic but I mean that's pretty easy to catch and fix

RIP C++ nerds that memorize the entirety of each release's manual to shave 3ms off every single function

[-] [email protected] 50 points 4 days ago

In the hands of a skilled craftsman, the machine enhances the craftsman's productive process. In their shop, as part of their labor process, making bespoke things, the machine serves the craftsman.

However, the machine also reduces the socially necessary labor time for the mass production of a given thing. The laborers within this production environment are not craftsmen. All they know is how to operate the machine, making parts of an eventual whole they'll never have a full hand in producing.

As the laborers are replaced and the machine persists, there is less demand for the skills of a craftsman, and the new laborers do not need to be trained to the same capacity as the craftsmen. This process naturally deskills the labor force as time progresses.

You find it useful because you are trained and can more effectively describe the issue it's resolving, because the code is the result of your skill and training. You are the craftsman in this situation. Soon, however, you will become the supervisor to juniors who have even less understanding than you had in their position, producing code using a machine with little understanding of its output, leaving you to pick up the slack.

[-] [email protected] 27 points 4 days ago

The output will sometimes change the logic but I mean that's pretty easy to catch and fix

Lol. Lmao.

[-] [email protected] 2 points 1 day ago

Rest of the owl moment.

It's hard enough to follow the logic of something I wrote 5 months ago, let alone the logic of something I didn't even write that is likely incorrectly documented as well.

LLM code will pretty frequently write bad logic, then add a comment above that logic that says the following code does X when in reality it does Y. Not like humans don't do that too, but at least I can git blame a human...
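A made-up example of the kind of mismatch I mean (the function is hypothetical):

# Comment claims one thing:
# "Return the three most recent entries"
def recent_entries(entries):
    # ...but sorted() ascending puts the *oldest* first, so this returns the three oldest
    return sorted(entries, key=lambda e: e.timestamp)[:3]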

[-] [email protected] 19 points 4 days ago

Writing code was never really the hard part of programming

[-] [email protected] 7 points 4 days ago

It never was, but learning the language and various frameworks inevitably took up a lot of your time anyway

[-] [email protected] 20 points 4 days ago

What I really need is a steaming pile of new legacy code that might even work.

[-] [email protected] 21 points 4 days ago

In my experience, LLMs can often pump out perfectly fine starting code for very basic problems. If you're coding up some tiny blog it'll probably be good enough that someone with some coding experience can unfuck the places where it screwed up.

That's not what professional software engineering lenin-dont-laugh is about though. You want a codebase built in a coherent, consistent, repeatable way that can be independently worked on by dozens of people often at the same time, and LLMs cannot do that in any real capacity. It might serve as a decent tool for spinning up quick proof-of-concepts (I've poked it to figure out a new framework before, mostly due to terrible documentation making it difficult to figure out how to use specific features) but none of it was production-worthy and never would be.

Furthermore, if you're using it to figure out entire languages/frameworks for you, you're quickly finding yourself in a position where you don't see where it's fucking up, which is going to hit you down the line when you're playing whack-a-mole with a bug or severe performance issue in a giant codebase.

[-] [email protected] 5 points 4 days ago

I still think you are talking about the architect and principal engineer level, which AI is not going to replace for a long while. It's a productivity multiplier and maybe will replace junior developers at most. But that still seems another year or so away, if not longer

[-] [email protected] 3 points 4 days ago

I'm not advocating for using it to attempt to write a new feature end to end. Use it to help with your code, function by function

It's great for syntax, not good for the where

[-] [email protected] 17 points 4 days ago

LLMs are the affordable table saw, no, the affordable router table of the software world.

They’re a powerful tool, but they increase the chance that the user will fiddlefuck around doing something stupid at best or something brutally, nastily, scared straight in the classroom dangerous at worst.

If there ever was a time to “jack off” and get rid of any computer interaction in your life, now is it.

[-] [email protected] 20 points 4 days ago* (last edited 4 days ago)

Become “pretty close to an expert” by… outsourcing the process of improving your code to a machine…

Even if it improves your code in that scenario, you’re not going to really understand what it’s doing or why. You can use AI as a shortcut for scripting, but you can’t use it as a shortcut for learning

Edit: Besides, we already have perfectly good static analysis tools. Just use a linter. Trying to use AI as a linter will just be worse and more unpredictable than using an actual linter

[-] [email protected] 1 points 4 days ago

But I'm not using it for learning. I already understand exception handling, concurrency, typing, etc.

But I only know the exact syntax for some languages

Now I can replicate the best practices for those concepts in a language I've never touched, and I can understand what it does because I know the equivalent syntax in another language and so I can also judge the quality as well

I'm even more confident when the new language is C-based, because I'm already familiar with other C-based languages

Obviously it'll never be as good as a person who spent the time to learn the language by reading documentation and practicing, but in most cases it should be fine

[-] [email protected] 18 points 4 days ago

I already understand exception handling, concurrency, typing, etc.

But I only know the exact syntax for some languages

Now I can replicate the best practices for those concepts in a language I’ve never touched, and I can understand what it does because I know the equivalent syntax in another language and so I can also judge the quality as well

I'm sorry, but this just doesn't match my experience. I have used greenlet, Node.js, asyncio, POSIX threads, kqueue, and uv, and just recently I had to look at something that uses tokio (Rust). I would never say confidently that just because I know the syntax of one concurrency library, I can look at a different language and equivalent library and immediately judge the quality and understand what it does.

That is just not realistic
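Even within one language, two things that look interchangeable aren't. A tiny Python illustration:

import asyncio
import time

def blocking_wait():
    time.sleep(1)           # blocks the whole thread, and every task sharing it

async def cooperative_wait():
    await asyncio.sleep(1)  # yields to the event loop; other tasks keep running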

[-] [email protected] 11 points 4 days ago

Don't you know that all programming languages are the same, except each uses a different set of symbols and keywords? Since all languages are the same, we can use an LLM to efficiently translate code from one language into another where it will perform optimally. /s

[-] [email protected] 7 points 4 days ago

"Computer, replace all the whitespace indentation with curly braces and put a semicolon at the end of every line, in order to convert my Python program to Rust"

[-] [email protected] 1 points 3 days ago

Python also allows you to override what operators do, meaning I can write

class MyInt(int):
    # CPython won't let you assign to int.__add__ on the builtin itself,
    # so subclassing is the closest runnable version
    def __add__(self, other):
        return "hello world"

>>> MyInt(1) + 2
'hello world'

And that's totally valid code. I don't think many other languages allow that, and translating it would be a mess.

[-] [email protected] 2 points 3 days ago* (last edited 3 days ago)

It's technically operator overloading; the wacky thing Python allows is hooking the operators of base types like int (via subclassing), which I'm not sure other languages let you do for base types.

[-] [email protected] 2 points 2 days ago

That's what I meant: you can modify the behavior of code by directly overriding the operator implementation for base types. What it really reveals is that a Python int is not at all a C int, or really any other int.

Directly translating syntax without knowing that the Python type is so vastly different from, say, the C type is a recipe for latent disaster.

[-] [email protected] 1 points 2 days ago

My favorite weird Cpython implementation detail is that -5 to 256 are pre-cached when the interpreter is initialized. So identity checks using those numbers return True, but return False for other numbers:

>>> x = 1
>>> y = 1
>>> x is y
True

>>> x = 100000
>>> y = 100000
>>> x is y
False

At least in newer versions of Python it screams at you for doing identity checks against integer literals

[-] [email protected] 2 points 2 days ago* (last edited 2 days ago)

Yes, but that is a common optimization, caching primitive types and common values. I believe the JVM has that behavior, as well as a couple Ruby implementations (MRI, possibly YARV but it's been a while since I looked at Ruby implementations)

[-] [email protected] 2 points 1 day ago

Yeah, caching the small integer values means using integer flags is a lot faster. Implementation details like that are always a kicker. Especially when Python's syntax kinda makes you want to say "val is 1" since it's more "human readable" than "val == 1".

This is actually abused for True and False too, which are instances of bool, a subclass of int, singleton values equal to 1 and 0.

Since they're technically different objects, "True is 1" is False, but "int(True) is 1" is True. "True == 1" is True though.
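Spelled out in the REPL:

>>> True is 1
False
>>> int(True) is 1
True
>>> True == 1
True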

Basically every language with an interpreter will do stuff like this, but it's usually not super well documented behavior as it's considered implementation detail or private API stuff.

[-] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Python’s syntax kinda makes you want to say “val is 1”

I have absolutely had issues with this. My editor configuration (python-mode) raises an error about this via flake8, but ruff ~~doesn't~~ didn't raise an error about it (even with experimental settings enabled), so my co-worker added code like this and it slipped through.

Ruff now has a rule for it, but I was a very early adopter, back when it hadn't been implemented yet.

https://docs.astral.sh/ruff/rules/is-literal/
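For reference, the kind of line that rule (F632) flags, with a throwaway variable name for illustration:

# flagged by ruff's F632 (is-literal) rule
if status_code is 200:  # should be: status_code == 200
    ...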

[-] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Python ~~3.13~~ 3.8+ now spits out a warning to stderr when you do this; it was a big enough issue that they had to bake a warning into the CPython interpreter lol.

But because it's a big enough issue, it also means there's probably tons of examples online of people writing code this way and that'll be something you have to be actively watching for when using generated code.

The reasoning models would probably ping on some forum post about it, but because it's technically valid Python code, there's a good chance that it would see enough identity comparisons to integer literals to think it's okay.

The hard part of programming always ends up being the undocumented implementation details like that. Syntax is easy once you can understand the structure, but it so easy to write something that technically works and passes the tests that will eventually fail spectacularly and be hard to find.

[-] [email protected] 2 points 1 day ago

100% agree with everything you've said. I think it really illustrates how Python has not really done a good job around memory management.

By which I mean, it has done it well enough that you normally don't have to think about memory management, and that actually ends up hurting you: the identity operator is explicitly about checking whether two references point to the same object, and people forget that. Then some interpreter optimizations around small integers make the is operator and the equality operator behave the same for small values, and people get the wrong impression of what the identity operator does.
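The distinction is clearer with objects, where no caching muddies it:

>>> a = [1, 2]
>>> b = [1, 2]
>>> a == b
True
>>> a is b
False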

I have the same problem with Python's "pass by assignment" way of doing things, where it's not explicit enough for my liking. Don't get me wrong, I understand what it is doing after 20 years using the language, but I sort of appreciate C-like languages where references vs. values are more explicit, with things like pointers.

Zig has been interesting to me, for this reason.

[-] [email protected] 2 points 1 day ago

Haven't gotten into Zig, but it's on my list. I'm trying to get into Go and Rust now. Especially since I too have almost 20 years of using Python under my belt and hit those annoying "sometimes I just want to be explicit" moments.

That being said I will always come back to Python especially because I do now understand those pitfalls that can totally stump someone who's new to the language. The general syntax and elegance of generator expressions is addictive for small projects and quick tools. No boilerplate and a few comprehensions can do the work of a whole library (albeit quite a bit slower, but pretty frequently runtime is secondary to maintenance time).

I do like that type hinting is becoming the standard in Python though. I have absolutely abused the private APIs for reading type hints in my code to get a sort of poor man's runtime type validation in mission-critical (database) functions. I've used type hints with a decorator to build API endpoints as well. I hope that someday Python finally just commits, stops pretending to be lazy Rust, and allows you to opt into static typing.
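A minimal sketch of that decorator idea, using typing.get_type_hints and only handling plain (non-generic) annotations; the function names here are made up:

import inspect
from functools import wraps
from typing import get_type_hints

def validate_types(func):
    # read the annotations once, at decoration time
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # isinstance only works for plain classes, not generics like list[int]
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, got {type(value).__name__}")
        return func(*args, **kwargs)

    return wrapper

@validate_types
def insert_row(table: str, row_id: int) -> None:
    ...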

[-] [email protected] 7 points 4 days ago* (last edited 4 days ago)

In case my edit didn't land in time: what makes the AI approach better than using existing non-AI static analysis tools?

[-] [email protected] 2 points 4 days ago* (last edited 4 days ago)

Well, my personal experience has been that the ML approach catches a lot of things the static analysis tools haven't. Those are hard-coded by humans, and there are dozens if not hundreds of ways to write any given function with identical logic. It's impossible for static analysis to be comprehensive enough to catch and fix a code block longer than a few lines
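For instance, the same logic in two of its many shapes; a rule-based tool has no realistic way to recognize these as the same function:

def first_even_loop(nums):
    for n in nums:
        if n % 2 == 0:
            return n
    return None

def first_even_expr(nums):
    return next((n for n in nums if n % 2 == 0), None)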

As a concrete case: I had a super ugly nested try/catch exception block for a new integration test I was writing. It used a test framework and language I'd never touched before, so that was the only way I knew to write the logic. I asked the LLM to improve it, and it broke the nested try/catch into two top-level pieces with timeout functions and assertion checks I didn't know existed. The timeout removed the need to throw an exception, and the assertion fixed the issue of catching it
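Roughly the shape of that refactor, sketched in Python rather than the actual framework (wait_until and Job are hypothetical stand-ins):

import time

def wait_until(predicate, timeout=30.0, interval=0.5):
    # poll instead of throwing/catching for the "not ready yet" case
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

class Job:  # stand-in for whatever the test drives
    state = "done"

job = Job()

# no nested try/catch: a timeout handles "not yet", an assertion handles "wrong"
assert wait_until(lambda: job.state == "done"), "job never finished"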

[-] [email protected] 9 points 4 days ago

I’m glad you’ve gotten some actual use out of the LLMs! My outlook is more skeptical because I’ve seen too many interns get stuck on projects because they tried to ask LLMs for advice (which they did not double check) instead of reaching out to an experienced dev. The word calculators can only do so much.

[-] [email protected] 2 points 4 days ago

Oh don't get me wrong, I definitely think LLMs are gonna absolutely destroy kids' ability to learn anything, including coding, if they use them like a teacher

But for those who use it as a tool to build and do instead of learning, I'm quickly starting to become a strong believer in its usefulness

[-] [email protected] 19 points 4 days ago* (last edited 4 days ago)

You don't need an LLM to do this though. I thought this skill just comes naturally to programmers who have had a lot of experience in diverse areas.

I'm in Linux/OSS spaces and virtually no one uses an LLM here since much of the work is more social than technical (negotiating many ways to resolve an issue) with a lot more problem solving than busywork. At some point you have to get your hands dirty and there are no more shortcuts.

In fact, LLMs have only harmed us: there are more bogus bug reports and garbage slop being tossed at projects, not to mention every community git forge having to implement some form of DDoS mitigation because of the very real and harmful negative externalities of LLMs.

[-] [email protected] 10 points 4 days ago

It's also literally just an auto-format/quick-fix feature in nearly every IDE, and was long before the AI craze.

[-] [email protected] 3 points 4 days ago

Ofc I don't NEED it but it saves me a ton of time

At work this means less risk of being fired for low performance

At home this means more time for chores and other interests

Those issues you mentioned are due to the users not understanding the limitations of LLMs and what they're good or bad at. It's a tool that requires skill and knowledge to use well, and unfortunately a lot of people treat it like it has human-level understanding and reasoning

[-] [email protected] 4 points 4 days ago

At work this means less risk of being fired for low performance

It sounds like you need a union more than you need an LLM.

At home this means more time for chores and other interests

With hobby coding, the journey usually matters more than the destination.

[-] [email protected] 3 points 4 days ago* (last edited 4 days ago)

With hobby coding, the journey usually matters more than the destination.

Not always. There are some projects I do for learning, some projects I do to improve my life

It sounds like you need a union more than you need an LLM.

I know, but my coworkers have always been multimillionaires with property, half of them Indian fascists, a quarter Chinese anti-communists, and almost all of them constantly fearmongering about the homeless in SF

[-] [email protected] 15 points 4 days ago

It won't kill off the need for skill, but a lot of lazy "coders" will stop even learning a minimal amount, and get by on no skill at all. Until they get fired, too.

Nothing the LLM generates can be relied on, it's a gibberish machine.

[-] [email protected] 3 points 4 days ago

Agree with OP, but it's more than a gibberish machine for most modern implementations. Copilot agent mode is good

[-] [email protected] 8 points 4 days ago

geordi-no me writing spaghetti code

geordi-yes AI writing spaghetti code

[-] [email protected] 5 points 4 days ago

hand pulled vs. mechanically extruded pasta

[-] [email protected] 9 points 4 days ago* (last edited 4 days ago)

This is only really useful in low-expressiveness languages where there isn't a huge set of language enhancements possible through libraries. Think Java exception handling, for example.

In essence it works if your "best practices" are things like "don't use switch statements."

It doesn't work if your best practices are things like "use Result<T, E> from this functional result library."
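By which I mean idioms like this (a hand-rolled sketch, not any specific library):

from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

Result = Union[Ok[T], Err[E]]

def parse_port(raw: str) -> Result[int, str]:
    # errors travel as values, not exceptions
    if not raw.isdigit():
        return Err(f"not a number: {raw!r}")
    return Ok(int(raw))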

Essentially LLMs don't really work "at scale" if you need anything more complicated than the average internet tutorial code in your language.

Same with perf.

Also, this only works maybe 60% of the time, if that, so the more requirements you pile on, the less likely it is to hit all of them properly.

[-] [email protected] 4 points 4 days ago

you're probably onto something but I feel like this isn't gonna stop employers from requiring people to have x years experience with whatever specific stack they're using

[-] [email protected] 6 points 4 days ago

I've never used it for coding, because the last time I was paid to write code it was updating a legacy codebase to Java SE 6, but last night I discovered my first real use case for LLMs: naming my farm and animals in a new Story of Seasons playthrough.

[-] [email protected] 2 points 4 days ago

You can write FORTRAN in any language.
