Since I'd already started typing my hot-take reaction I'll just post it as-is, but seeing as Ed Zitron has already addressed this, I'd recommend reading his take instead; it's surely better sourced and more professionally written :)
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.
I'm not exaggerating. That is what my Monday looked like this week.
I made it to about here and then decided I couldn't suspend disbelief enough to continue. The way this author describes it, we're now at the point where you can just prompt, "I want a native Linux version of Adobe Photoshop/Solidworks/whatever-the-fuck-MS-Office-is-now-called," and the AI should be able to make that happen.
Now that I think about it more, why should anybody even care whether AI can write code? You wouldn't tell it to make accounting software for your business; you'd tell it to do your business accounting and not care how it was doing so.
In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.
-
Prior to 2022, our calculators could reliably do arithmetic, but they would also display an error when asked to perform nonsensical operations. Are the amazing models the author describes now consistently able to resist efforts to force a response where no correct one exists? I suspect that with the right prodding they'll still declare a winner in the unstoppable force vs. the immovable object, etc.
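To be concrete about what I mean by "display an error", here's a trivial Python sketch of my own (nothing from the article): the entire safety story of a pocket calculator is that it refuses to answer when no correct answer exists, instead of guessing.

```python
def divide(a: float, b: float) -> float:
    """Pocket-calculator division: refuse when no correct answer exists."""
    if b == 0:
        # The calculator's whole error model: say "Error", never invent a number.
        raise ZeroDivisionError("Error: division by zero is undefined")
    return a / b

print(divide(56, 8))  # 7.0
print(divide(7, 0))   # raises, rather than confidently making something up
```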
-
So am I not supposed to assume that the model was trained on bar-exam material and then asked to complete a test drawn from that same material? I wouldn't be impressed if you told me a child had passed the bar exam using the answer key, either.
-
I read this as "working software" = capable of "Hello World!", and "explain graduate-level science" = can quote relevant blocks of text sourced from wikis and scholarly publications. Could it plausibly attempt to explain anything novel and science-related, or can it only repeat things that are already published and understood?
-
If the best engineers in the world have already handed over most of their coding work, then why are any engineers still employed anywhere? How is just replacing customer service reps with AI agents working out so far?