Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. If you're wondering why this went up late, I was doing other shit)
I just had one of those "brain-doing-brain-stuff-good" moments (I think normal people call them delusions?) pondering about why it is that AI code extruders are seeing widening adoption.
tl;dr - there's a bunch of people uncurious about the nature of the abstractions they use and it's a tragedy.
First a moment of background: My first software dev position was using Lisp and one of the most powerful concepts built into the language runtime was the macro facility, the ability to write code that writes code. The main downsides of Lisp are obsequious Lisp developers and hard-to-master C foreign function interfaces, so what you have is a toolchain of abandoned dependencies made by some real annoying characters, but I digress. The ability to write code that writes code is a powerful concept.
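To gesture at what "code that writes code" means outside Lisp: here's a toy sketch in Python. (Real Lisp macros operate on syntax trees at compile time, which runtime `exec` only approximates, and `make_record` is a made-up helper for illustration.)

```python
# Toy "code that writes code": generate a record class from a field list.
# Lisp macros do this kind of thing at compile time on syntax trees;
# this runtime-exec sketch only captures the flavour.

def make_record(name, fields):
    """Build source text for a simple class, then exec it into existence."""
    lines = [f"class {name}:"]
    params = ", ".join(fields)
    lines.append(f"    def __init__(self, {params}):")
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    source = "\n".join(lines)
    namespace = {}
    exec(source, namespace)  # deterministic: same input, same class
    return namespace[name]

Point = make_record("Point", ["x", "y"])
p = Point(3, 4)
print(p.x, p.y)  # prints: 3 4
```

The key property, shared with macros and codegen tooling alike, is that the generated code is a pure function of its input.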
I moved on to working with .Net, which somewhere around the 4.6 release gained enhancements to its built-in code-generation utilities. This led to better code generators for numerous purposes (certain DI containers started doing dependency resolution at build time, for example).
I did Scala for a time, which had a macro facility that was hot garbage and was rewritten between Scala 2 and 3, so I never bothered to learn it. Around this time the orgs I worked for were placing an emphasis on OpenAPI / Swagger specs, for reasons I never learned: while there was tooling that could generate both the entire HTTP client and the set of interfaces for the API surface, we used neither (where I am right now we still do neither form of codegen).
Anyways, code generation, whether via external tooling or built-in language facilities, is magical, but it is deterministic magic: identical input should yield identical output. It is also hard to use well. The ergonomics of the OpenAPI / Swagger codegen tooling are pretty bad, though not impossible, and under the hood the whole thing is powered by mustache templates. The .Net stuff is still there and works well, but I don't think many workplaces want to invest in really understanding that tooling and how it can be employed. Lisp will always be Lisp; good job, Lisp. There are other examples of code generation used for practical ends, I am sure.
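The deterministic property is easy to see in a minimal spec-to-stub generator, sketched here in Python (a hand-rolled stand-in: the actual OpenAPI generators use mustache templates, and the spec shape and `generate_stub` function below are entirely hypothetical):

```python
# Minimal deterministic code generator: an endpoint spec goes in, the
# same client stub always comes out. A toy stand-in for the mustache
# templates the OpenAPI / Swagger toolchain actually uses.

TEMPLATE = '''def {name}(client, {params}):
    """Auto-generated client stub for {method} {path}."""
    return client.request("{method}", "{path}".format({params_kwargs}))
'''

def generate_stub(spec):
    params = ", ".join(spec["params"])
    params_kwargs = ", ".join(f"{p}={p}" for p in spec["params"])
    return TEMPLATE.format(
        name=spec["name"],
        method=spec["method"],
        path=spec["path"],
        params=params,
        params_kwargs=params_kwargs,
    )

spec = {"name": "get_user", "method": "GET",
        "path": "/users/{user_id}", "params": ["user_id"]}

# Identical input yields identical output, every time.
assert generate_stub(spec) == generate_stub(spec)
print(generate_stub(spec))
```

Run the generator twice, diff the output: nothing changes. That reproducibility is exactly the property an LLM-based extruder gives up.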
The point is that code generation requires being able to think and define certain forms of abstractions outside of the target functionality of a single program and while it's not hard to do that thinking, it's just high enough of a bar that your typical enterprise engineer won't engage with that (but will always be amazed by the results!).
AI Code Extruders change the cognitive burden that would be required for code generation into something that I guess appeals to engineers. You can specify something in the abstract and a Do-What-I-Mean machine may churn up something minimally useful, determinism be damned. Not only would an engineer not need to consider the abstraction layer between their input and the code but they would be unable to fully interrogate that abstraction because the code extruder does not need to show its work.
Just a thought. Probably a very silly thought.
I think you're actually right on the money here, nowhere near delusional, especially since you come from a Lisp background. I really appreciate Lisp (and Smalltalk) for the "live-coding" and universal inspectability/debuggability aspects in the tooling. I appreciate test-driven development as I've seen it presented in the Smalltalk context, as it essentially encourages you to "program in the debugger" and be aware of where the blank spots in your program specification are. (Although I'm aware that putting TDD into practice on an industrial scale is an entirely different proposition, especially for toolchains that aren't explicitly built around the concept.)
However, LLM coding assistants are, if not the exact opposite of this sort of tooling, something so far removed as to be in a different and more confusing realm. Since it's usually a cloud service, you have no access to begin debugging, and it's drawing from a black box of vector weights even if you do have access. If you manage to figure out how to poke at that, you're then faced with a non-trivial process of incremental training (further lossy compression) or possibly a rerun of the training process entirely. The lack of legibility and forthright adaptability is an inescapable consequence of the design decision that the computer is now a separate entity from the user, rather than a tool that the user is using.
I've posed this question in another, slightly less skeptical forum: what advantage do we gain from now having two intermediate representations of a program, the original fully-specified programming language as well as the compiler IR / runtime bytecode? I have yet to receive a satisfactory answer.
@BurgersMcSlopshot @BlueMonday1984
I am cleaning up behind uncurious people that have made some vexing category errors.
I feel this, I was dealing with this at a prior employer.
I think there's definitely something to that. It seems like it rhymes with my own interpretation, at least. I did 7 years of support for backend network infrastructure (load balancing, SSL optimization, etc.) and one thing I consistently found was that, given how applications and tech services were structured at most of these companies, everything was treated as a complete black box by everyone who wasn't specifically working on that element. I would find myself trying to trace a problem through the application flow, and every other request was essentially being handled by a completely different team, and the people I was talking to didn't even understand the questions I was asking.

That level of siloed work is somewhat necessary given the sheer complexity of the systems and infrastructure that modern applications rely on, but it also seems to cultivate a certain level of incuriosity. What's happening inside those black boxes doesn't even get considered, because it doesn't matter; it's somebody else's problem right up until it suddenly isn't. The current crop of confabulation machines takes this tendency to a kind of logical extreme: nobody can adequately look into the black box to understand what it's doing, and that will similarly be perfectly fine up until it very much isn't, and there won't be anyone to call to figure out how to fix it.