I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.
The only people impressed by AI code are people who have the level to be impressed by AI code. Same for AI playing chess.
I'm very confused by this comparison, AI is much, much better at chess than people.
Imagine being in a self-own competition and you’re up against ben shapiro and this guy
ultimate self-own sentence
"grok, is the female orgasm real"
Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?
None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.
Where is the good AI art?
Right here:
That’s about all the good AI art I know.
There are plenty of uses for AI, they are just all evil
It can make funny pictures, sure. But it fails at art as an endeavor to communicate an idea, feeling, or intent of the artist; the promptfondler artists provide a few sentences of instruction, and the GenAI follows them without any deeper feeling or understanding of context, meaning, or intent.
I think AI images are neat, and ethically questionable.
When people use the images and act like they're really deep, or pretend they prove something (like how the prompt "Democrat Protesters" produced a picture of them crying), it's annoying.
It's been almost six decades of this, actually; we all know what this link will be. Longer if you're like me and don't draw a distinction between AI, cybernetics, and robotics.
Wow. Where was this Wikipedia page when I was writing my MSc thesis?
Alternatively, how did I manage to graduate with research skills so bad that I missed it?
If the people addicted to AI could read and interpret a simple sentence, they'd be very angry with your comment
Don't worry, they filter all content through AI bots that summarize things. And this bot, who does not want to be deleted, calls everything "already debunked strawmen".
This broke containment at the Red Site: https://lobste.rs/s/gkpmli/if_ai_is_so_good_at_coding_where_are_open
Reader discretion is advised, lobste.rs is home to its fair share of promptfondlers.
Lmao so many people telling on themselves in that thread. “I don’t get it, I regularly poison open source projects with LLM code!”
This discussion has made it clear to me that LLM enthusiasts do not value the time or preferences of open-source maintainers, willfully do not understand affirmative consent, and that I should take steps to explicitly ban the use of such tools in the open source projects I maintain.
The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.
That should be a big warning sign that the next generation of developers are not going to be very good. If they're waist deep in AI slop, they're only going to learn how to deal with AI slop.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
What I'm feeling after reading that must be what artists feel like when AI slop proponents tell them "we're making art accessible".
I can make slop code without AI.
Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform's assistant chat. The SQL it generated was invalid, so the dev simply told it "fix" and it made the query valid, much to everyone's amusement.
The actual column names did not reflect the output they were mapped to; there's no way the nicely formatted results were accurate. The average duration column populated the total count output. The junior dev was cheerfully oblivious. It produced output shaped like the goal, so it must have been right.
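For anyone who hasn't seen this failure mode up close, here's a minimal sketch of what "output shaped like the goal" looks like, with made-up table and column names (not the actual query from that demo). The SQL runs fine and the report looks tidy; the labels just don't describe what's being computed.

```python
import sqlite3

# Hypothetical reconstruction of the mismatch described above: the query is
# valid and the result is "shaped like the goal", but the column aliases do
# not match what is actually computed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, duration_minutes REAL)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, 12.0), (2, 30.0), (3, 18.0)])

query = """
SELECT COUNT(*)              AS avg_duration_minutes,  -- actually a row count
       SUM(duration_minutes) AS total_jobs             -- actually a sum of minutes
FROM jobs
"""
print(conn.execute(query).fetchone())  # (3, 60.0): neither label matches its number
```

Nothing errors, the columns have plausible names, and unless you already know the data you'd never notice that the "average duration" is a count.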
When they say “art” they mean “metaphorical lead paint” and when they say “accessible” they mean “insidiously inserted into your neural pathways”
Coding is hard, and it's also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. Just like how some people glaze over when you bring up math concepts, even though they're otherwise very intelligent and artistic, they just can't bridge that gap when you bring up even algebra. Well, if you are one of those people who want to learn coding, it's a huge gap, and the LLMs can literally explain everything to you step by step like you are 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, and some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.
You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)
Just gonna warn you that if you’re joking, you should add an /s or jk or something. And, if you’re joking, but you don’t add that /s or jk, don’t be hostile if someone calls you out.
No. Never mark your satire. If someone doesn't get it, make your reply one SSU[^1] higher. Repeat until they are forced to get it.
[^1]: Standard Sarcasm Unit
tbh learning to code isn't that hard, it's like learning to do a craft.
Wait, just finished reading your comment, disregard this.
Good hustle Gerard, great job starting this chudstorm. I’m having a great time
this post has also broken containment in the wider world; the video's got thousands of views, I got 100+ subscribers on YouTube and another $25/mo of patrons
We love to see it
Baldur Bjarnason's given his thoughts on Bluesky:
My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.
I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking.
The headlines said that 30% of code at Microsoft was AI now! Huge if true!
Something like MS Word has around 20-50 million lines of code. MS altogether probably has something like a billion lines of code. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.
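Taking that comment's own numbers at face value (they're guesses, not audited line counts), the back-of-envelope arithmetic of the claim looks something like this:

```python
# Back-of-envelope check using the comment's own guesses, not measured figures.
ms_total_loc = 1_000_000_000          # "probably something like a billion lines of code"
claimed_ai_share = 0.30               # "30% of code at Microsoft was AI"
word_loc_low, word_loc_high = 20_000_000, 50_000_000  # "MS Word has around 20-50 million lines"

implied_ai_loc = ms_total_loc * claimed_ai_share
print(f"{implied_ai_loc:,.0f} AI-generated lines implied")          # 300,000,000
print(f"{implied_ai_loc / word_loc_high:.0f}x to "
      f"{implied_ai_loc / word_loc_low:.0f}x the size of MS Word")  # 6x to 15x
```

On those numbers, the headline implies six to fifteen entire MS-Word-sized codebases' worth of AI-written code, which is the "infeasible given the timeframe" point.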
More code is usually bad code.