Is there? Shocker.
Capitalism does not work because companies will always seek to grow more and more and more. It's the core of capitalism. You need anti-capitalist policies to keep companies small.
You missed the point. The point is that almost all software today follows the same general ideas, patterns, etc.
The quality of AI output is not tied to what those patterns are applied to. Even if, say, your tool uses a completely new network protocol, an LLM will still "understand" that it is a network protocol, that it serializes and deserializes data following the rules you define, and it will write that down in a memory file and be able to work with it.
A new file format? Same. A very specialized new kind of NoSQL database that fits your very specific tool better? It will also write down in a file how it works and be able to use that.
It's only as good as the documentation you give it. For basic things, such as setting up a simple REST API, it has already learned that from its training data. If it hasn't, it's up to you to provide the documentation, and it will be perfectly able to use it.
Even if you build some weird unique assembly language it will be able to use it if you give it the set of instructions and their documentation.
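To make that concrete, here's a minimal sketch in Python of the kind of spec I'm talking about. The "FrameProto" format, its field names, and these functions are entirely made up for illustration; the point is that if you can write the rules down this precisely for a human, an LLM can follow the same spec.

    import struct

    # Hypothetical "FrameProto" wire format (made up for this example):
    #   magic    : 2 bytes, always b"FP"
    #   msg_type : 1 byte, unsigned int
    #   length   : 4 bytes, big-endian unsigned int, size of payload
    #   payload  : `length` bytes, UTF-8 text

    MAGIC = b"FP"
    HEADER = struct.Struct(">2sBI")  # magic, msg_type, length

    def encode(msg_type: int, text: str) -> bytes:
        """Serialize one frame according to the spec above."""
        payload = text.encode("utf-8")
        return HEADER.pack(MAGIC, msg_type, len(payload)) + payload

    def decode(frame: bytes) -> tuple[int, str]:
        """Deserialize one frame, validating the magic bytes."""
        magic, msg_type, length = HEADER.unpack_from(frame, 0)
        if magic != MAGIC:
            raise ValueError("not a FrameProto frame")
        payload = frame[HEADER.size:HEADER.size + length]
        return msg_type, payload.decode("utf-8")

    if __name__ == "__main__":
        frame = encode(1, "hello")
        print(decode(frame))  # (1, 'hello')

Hand an LLM a description like the comment block above and it will read, write, and debug frames in that format just fine, even though the format appears nowhere in its training data.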
There are definitely real engineers who are strongly anti-AI. The problem, in my opinion, is that they just haven't really tried working with these tools.
They're incredibly powerful tools, and they don't only amplify bad developers; they amplify every developer who really tries to work with them.
The mistake people make is delegating the decision making to the AI. Let the tool be a tool, not a brain. You architect, you design, you give the orders, it writes the code. You review the code. There you go: you have pretty good quality code, better than most devs will produce, following your design and architecture, you controlled the entire decision making, and you did it in a fifth of the time.
I also think it has become too useful to disappear from engineering.
I don't think history will remember that he did not do something, especially something that he had no power to do. He'll be remembered for what he did.
Because the whole party wants it.
So you mean, if this use of 'literally' had been around for, say, several centuries, you'd consider it acceptable?
Well, it's a good thing that the post does not say "bad at reading" then, isn't it?
And then you become even more identifiable, because you're part of the 10 madmen in Google's database who do it.
The point is not that they know your IP, but that even your IP already gives away information. That's why they start with the information, rather than the IP being the source.
This is not intended to be for people who understand how this works.
And as someone else said, probably vibe coded.
Yet another article that draws the wrong conclusions.
No, it does not make people stupider. It makes people lazier. Just. Like. All. Tech.
How many of us do research in libraries rather than on the internet these days? Back when the internet became popular, there were criticisms similar to what we hear today about AI.
Essays are AI generated, show poor critical thinking, and you can tell? Great, grade it for what it is: piss-poor work. Just like someone who copied a Wikipedia article 15 years ago would be graded like shit, perhaps even considered to be cheating and given a 0 (or an F, or whatever the worst grade is in your system).
If you can't tell, then the tool was properly used. If you can't tell the difference between an AI generated essay and a human-made essay, then perhaps essays are no longer good tests of someone's abilities.
Rather than pushing back against a tech that is probably never going away, even when the bubble pops, how about we start thinking productively and adapt how we learn, evaluate, and work?