this post was submitted on 17 Mar 2025
1259 points (99.8% liked)
Programmer Humor
you are viewing a single comment's thread
An otherwise meh article concluded with "It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience."
Much as we want to point and laugh - this is not some loon's fantasy. This is happening. Some dingus told spicy autocomplete 'make me a database!' and it did. It's surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan's law.
This tech is borderline miraculous, even if it's primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.
The years of specialized education and experience aren't for writing code in and of itself. Anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theories behind why code written one way works better than code written another way; and knowing the qualitative and quantitative measures needed to even assess whether one piece of code is "better" than another. Source: I self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, and that translates directly into bad code that's a nightmare to maintain.
Finally, the most important thing you can do with a person who has years of specialized education and experience is actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask follow-up questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own (or their coworkers') asses, AI does it even when it makes no sense not to tell the truth, because it doesn't really know what "the truth" is or why other people would want it.
Will AI eventually catch up? Almost certainly, but we're nowhere close to that right now. Currently it's less like an actual professional developer and more like someone who knows just enough to copy-paste snippets from Stack Overflow and hack them together into a program that manages to compile.
I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we're going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.
Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I'd never heard of modulo. Or functions.
Right now, this lets people skip the hair-pulling syntax errors and tell the computer what they think the program should be doing, in plain English. It's not even "compileable pseudocode"; it's high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to "implement MP4 encoding" and expect that to Just Work.
But it's teaching people to write the comments first.
The distance from here to "oh shit" is shorter than we'd prefer. This tech works like a joke. "Chain of thought" apparently means telling the robot to act smarter... and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn't look like Hatsune Miku. If it's stupid, but it works... it's still stupid. But it works.
Someone's gonna prompt "Write like Donald Knuth" and the robot's gonna go, "Oh, you wanted good code? Why didn't you say so."
This industry also spends most of its money either changing things that don't need to change (we optimized the right-click menu to remove this item, mostly to fuck with your muscle memory) or avoiding change (rather than implementing 2FA, banks have implemented 58372658 distinct algorithms for detecting things that might be fraud).
If you're just talking about enabling small scale innovation you're probably right, but if you're talking about the industry as a whole I think you need to look at what people in industry are actually spending their time on.
it's not code.
Yeah, I've been using it heavily. While someone without technical knowledge will surely allow AI to build a highly insecure app, people with more technological knowledge are going to propel things to a level where the less tech savvy will have fewer and fewer pitfalls to fall into.
For the past two months, I've been leveraging AI to build a CUE system that takes a user desire (e.g., "I want to deploy a system with an app that uses a database and a message queue," expressed as a short JSON) and converts it into a simple configuration file that unpacks into all the Kubernetes manifests required to deploy the system they want.
I'm trying to be fully shift-left about it. So, even if the user's configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack: ingress controller, KEDA, some kind of logging such as an ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should at all times be created in a secure state. And extra CUE transformations ensure that you can split the deployment destinations any way you like: local/on-prem, any cloud provider, or any combination thereof.
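To make the "secure state" idea concrete, here's a minimal sketch of what that could look like in CUE. The field names and tool choices are illustrative assumptions on my part, not the actual schema:

```cue
// Hypothetical stack schema: every security-relevant choice carries a
// default (the *-marked branch), so even a near-empty user config
// unifies into a full DevSecOps baseline.
#Stack: {
	ingress: *"nginx" | "traefik"          // an ingress controller is always present
	logging: *"elk" | "loki"               // logging is opt-out, not opt-in
	scanner: *"trivy" | "grype"            // vulnerability scanning by default
	policy:  *"kyverno" | "opa-gatekeeper" // a policy agent by default
	targets: [...("local" | "onprem" | "aws" | "gcp" | "azure")]
}

// A user who only says where to deploy still gets the whole baseline.
myStack: #Stack & {targets: ["local", "aws"]}
```

Because CUE resolves everything by unification, a user override that contradicts the baseline is a build-time conflict rather than a silently insecure deployment.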
The idea is that if I need to swap out a component, I just change one override in the config, and the incoming component already knows how to connect to everything and do what the previous component was doing, because I've already abstracted the component's expected manifest fields using CUE. So I'd be able to do something like move my deployment from one cloud to another with the click of a button, or build up a whole new fully secure stack for a custom purpose within a few minutes.
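A sketch of that abstraction, with hypothetical component names: every backend satisfies the same contract, so the swap really is a one-line override.

```cue
// Hypothetical contract every database component must satisfy.
#Database: {
	engine: string
	connection: {host: string, port: int}
}

postgres: #Database & {
	engine: "postgres"
	connection: {host: "pg.svc.cluster.local", port: 5432}
}

mysql: #Database & {
	engine: "mysql"
	connection: {host: "mysql.svc.cluster.local", port: 3306}
}

// The rest of the stack only ever references the contract, never a
// concrete backend, so changing this one line re-unifies everything
// downstream against the new component.
deployment: db: postgres
```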
The idea is I could use this system to launch my own social media app, since I've been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system and put a web interface over it to turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position if I have a PaaS that can quickly scale and provide automated secure back ends.
Of course, I intend to open source the CUE once it's developed enough to get things off the ground. I'd really love to make money from my creative ideas on a socialized media app that I create, but I'm less excited about gatekeeping this kind of advancement.
Interested to know if anyone has done this type of project in the past. Definitely wouldn't have been able to move at nearly this speed without AI.
so, MASD (or MDE) then?
I've never heard of this before, but you're right that it sounds very much like what I'm doing. Thank you! Definitely going to research this topic thoroughly now to make sure I'm not reinventing the wheel.
Based on the sections in that link, I wondered if the MASD project was more geared toward the software dev side or devops. I asked Google and got this AI response:
So (if accurate), it sounds like all the modernized automation of CI/CD, IaC, and GitOps that I know and love are already engaging in MAD philosophy. And what I'm doing is really just providing the last puzzle piece to fully automate stack architecting. I'm guessing the reason it doesn't already exist is because a lot of the open source tools I'm relying on to do the heavy lifting inside kubernetes are themselves relatively new. So, hopefully this all means I'm not wasting my time lol
AFAICT MASD is an iteration on MDE which incorporates parts of MAD but not in a direct fashion.
Lots of acronyms there.
These types of systems do exist, they just aren't mainstream because there hasn't been a version of them that could be easily used for general development outside of the specific mid-level niches they are built in.
I think it's the goal, but I've not seen anything come close yet.
Admittedly I'm not an authority so it may just be me missing the important things.
Thanks for the info. When I searched MASD, it told me instead about MAD, so it's good to know how they're differentiated.
This whole idea comes from working in a shop where most of the DevSecOps practices were fantastic, but we were maintaining fleets of Helm charts (picture the same Helm override sent to lots of different places with slightly different configuration). The unique values for each deployment were buried "somewhere" in all of these very lengthy values.yaml override files. Basically, you had to dig into thousands of lines of code whenever you didn't know off-hand how a deployment was configured.
I think when you're in the thick of a job, people tend to just do what gets the job done, even if it means you're going to have to do it again in two weeks. We want to automate, but it becomes a battle between custom-fitting and generalization. With the tradeoff being that generalization takes a lot of time and effort to do correctly.
So, I think plenty of places are "kind of" at this level, where they might use CUE to generalize but tend to modify the CUE for each use case individually. But many DevOps teams, I suspect, aren't even using CUE; they're still modifying raw yaml. I think of yaml like plumbing: very important, but best not exposed for manual modification unless necessary. Mostly I just see CUE used to construct and deliver Helm/Kubernetes manifests on the cluster, in tools like KubeVela and Radius. This is great for overriding complex Helm manifests with a simple Application YAML, but the missing niche I'm trying to fill is a tool that provides the connections between different tools and constrains the overall structure of a DevSecOps stack.
I'd imagine any company with a team who has solved this problem is keeping it proprietary since it represents a pretty big advantage at the moment. But I think it's just as likely that a project like this requires such a heavy lift before seeing any gain that most businesses simply aren't focusing on it.
My experiences are similar to yours, though less k8s-focused and more general DevSecOps.
This is mentioned in the link as "Barely General Enough". I'm not sure I fully subscribe to that specific interpretation, but the trade-off between generalisation and specialisation is certainly a point of contention in all but the smallest dev houses (assuming they aren't just cranking out hard-coded one-off solutions).
I dislike yaml syntax, in the same way I dislike Python, but it's pervasive in the industry at the moment, so you work with what you have.
I don't think yaml is the issue as much as the uncontrolled nature of the usage.
You'd have the same issue with any format that's this open to interpretation and created/edited by hand.
As in, if the yaml were generated and consumed automatically as part of a chain, I don't think it'd be an issue, but it's not nearly prescriptive enough to produce the high-level kind of model definitions further up the requirements stack.
Which brings us back to the generalisation vs. specialisation argument: do you create a hyper-specific DSL that only lets you define things that will work within the boundaries of what you want (and does that mean it can only work within those boundaries?), or do you introduce more general definitions and the complexity that comes with them?
Whether the solution is another layer of abstraction into a different format or something else entirely, I'm not sure, but I am sure that raw yaml isn't it.
Yes, I think yaml's biggest strength is also its built-in flaw: its flexibility. Yaml as a data structure is built to be so open-ended that it's no surprise when every component written in Go and using yaml as a data structure builds its spec in a slightly different way, even when performing the exact same functions.
That's why I yearned for something like CUE and was elated to discover it. CUE provides the control that yaml by its very nature cannot enforce. I can create CUE that defines the yaml structure in general so anything my system builds is valid yaml. And I can create a constraint which builds off of that and defines the structure of a valid kubernetes manifest. Then, when I go to define the CUE that builds up a KubeVela app I can base its constraints on those k8s constraints and add only KubeVela-specific rules.
Then I have modules of other components that could be defined as KubeVela Applications on the cluster but I define their constraints agnostically and merge the constraint sets together to create the final yaml in proper KubeVela Application format. And if the component needs to talk to another component, I standardize the syntax of the shared function and then link that function up to whatever tool is currently in use for that purpose.
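Roughly how that layering could look in CUE. The KubeVela `apiVersion` and `kind` are the real ones; the rest of the structure is an illustrative guess, not the actual schema:

```cue
// Layer 1: anything the system emits must at least be a valid k8s object.
#K8sObject: {
	apiVersion: string
	kind:       string
	metadata: name: string
	... // open: lower layers may add fields
}

// Layer 2: a KubeVela Application builds on the k8s constraint and
// adds only KubeVela-specific rules.
#VelaApp: #K8sObject & {
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	spec: components: [...{name: string, type: string, ...}]
}

// An agnostically defined component, merged into the KubeVela format.
queue: {name: "rabbitmq", type: "webservice"}

app: #VelaApp & {
	metadata: name: "my-stack"
	spec: components: [queue]
}
```

Each layer only narrows the one above it, so a manifest that unifies at the bottom is guaranteed to satisfy every constraint up the chain.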
I think it's a good point that overgeneralization can and does occur and my "one size fits all" approach might not actually fit all. But I'm hoping that if I finish this tool and shop it to a place that thinks it's overkill, I can just have them tell me which parts they want generalized and define a function to export a subset of my CUE for their needs. And in that scenario, I would flip and become a big proponent of "Just General Enough". Because then, they can have the streamlined fit-for-purpose system they desire and I can have the satisfaction of not having to do the same work over and over again.
But my fear about going down that road is that it might be less an export of a subset of code and more building yet another system that can MAD-style generate my whole CUE system for whatever level of generalization I want. As you say, it just becomes another abstraction layer. Can't say I'm quite ready to go that far 😅