Claude AI down: Anthropic users hit with errors as chatbot goes offline
(www.the-independent.com)
It's all an illusion. You don't need Claude to create, the ability has always been in you
the real Claude was the friends we made along the way
The friends we made were also Claude, though.
No it’s ok I blocked the Claude user from my repos
What if Claude was one's only friend, though?
Asking for a ~~Claude~~ friend.
I'm pretty sure the real Claude is up there in the sky, though
Is Claude in this chat?
You also don't need higher level programming languages. The ability to code assembly has always been inside you.
Claude doesn't have the ability to create images, it's mainly used for work
Sometimes work requires images. Claude is pretty awesome at making .svg files illustrating pretty much whatever you can describe.
I don’t need Claude to create, the ability has always been in me - but it comes out much more slowly without tools that assist me, whether that's books with example code, websites that document APIs, community sites that discuss problems and solutions, web searches that bring me reference material related to what I'm doing, or AI agents which propose formal requirements and code that implements those requirements complete with tests.
It's all my "creativity" - but a lot of professional programming more resembles painting a house than a still-life canvas. Painting a house using tiny art brushes is possible, but it takes a lot longer than using a spray-gun.
In all seriousness, using AI for codegen is at best shortsighted negligence. You know that problem huge long-running software projects have where it becomes a nightmare to change anything? That's some proportion of poor architectural design, lack of cleanup or refactor time, and poor understanding of the code by developers. Poor architectural design can be repaired by cleanup and refactoring, so both of those issues end up being management/planning failures more than anything.

Not understanding the codebase is much more complex. It can be caused by attrition causing loss of institutional knowledge, the code base growing faster than anyone can keep track of, the team being so large no one can stay on top of things, too much time passing since anyone has looked at or changed parts - lots of reasons. The only solution is doing a long audit and associated cleanup and refactoring. If you don't, it just takes forever to change anything because of all the knock-on effects that no one can predict, meaning delays and bugs.

When you use AI tools the code base grows very quickly, too quick to really comprehend, and you get shitty architecture to go along with it. You're just speedrunning enterprise software or spending all your time reviewing slop code. It's like a drug, the first time it does something fast and well you feel it's so great, but it will never live up to that because it secretly sucks and can only ever suck. Best case it slows you down and you get good software at the end. Worst case you spend all your time wrestling with it and never get a finished product.
You know what AI agents can help accomplish faster, with fewer human resources, than previous tools?
cleanup: Review this code for technical debt, report. Plan and implement fixes to address (selected portions of reported) tech debt.
refactor: Review this code for DRY and SSOT opportunities. Plan and implement...
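As a concrete (and entirely hypothetical) illustration of what a DRY/SSOT pass produces: the kind of refactor where duplicated validation in two handlers gets collapsed into one helper that becomes the single source of truth. All names here are made up:

```python
# After a DRY/SSOT refactor: two request handlers that previously each
# carried their own copy of the username rules now share one helper.
# (Hypothetical example - handler and field names are invented.)

def validate_username(name: str) -> str:
    """Single source of truth for username rules."""
    cleaned = name.strip()
    if not (3 <= len(cleaned) <= 32):
        raise ValueError("username must be 3-32 characters")
    if not cleaned.isalnum():
        raise ValueError("username must be alphanumeric")
    return cleaned

def handle_signup(form: dict) -> dict:
    # Previously duplicated the length/charset checks inline.
    return {"user": validate_username(form["username"]), "action": "signup"}

def handle_rename(form: dict) -> dict:
    # Same checks, same helper - a rule change now lands in one place.
    return {"user": validate_username(form["username"]), "action": "rename"}
```

The payoff is exactly the maintainability point above: when the username rules change, there is one function to edit instead of a scavenger hunt.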
Architectural Design - yeah, I'm not on a good footing with how to leverage the current tools for good architectural design. They are good, however, at tech stack selection - comparisons of various options, including architectural options. They're not always great at following architectural designs when the system gets too complex to keep the whole architecture in context while designing. Much like human designed systems, they work better if you can modularize and keep each module a manageable size, building tree-style to form the larger system.
poor understanding of the code by developers. Yeah, any code not written by me is hard to understand, and any code written by me is hard for others to understand. "Me" being the vast majority of developers I have ever worked with. At least agents will comment their code and write somewhat comprehensive documentation when you ask them to.
management/planning failures more than anything. - the strongest tool I have found for AI development is to have the agents make plans. Review those plans, or not, but have them make a plan then have them implement the plan then have them review the implementation against the plan and point out discrepancies / shortcomings. The worst behavior AI agents had (a few months ago, they're getting better) was to do some fraction of what you tell them to, then say - effectively "ALL DONE BOSS! What's next?" What's next is to go back to the written plan and make sure it's complete. I think, again, they lose sight of the plan as their context window overflows, so you have to keep reminding them to re-read it. Management.
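That plan / implement / review-against-plan loop can be sketched as a simple driver. Everything here is an assumption for illustration - `run_agent` is a hypothetical stand-in for whichever agent call you actually use:

```python
# Sketch of a plan-driven agent loop. `run_agent` is a hypothetical
# placeholder - substitute your real agent invocation.

def run_agent(prompt: str) -> str:
    # Placeholder: in practice this would call your agent of choice.
    return f"[agent response to: {prompt[:40]}...]"

def plan_driven_task(task: str, max_rounds: int = 3) -> str:
    plan = run_agent(f"Write a step-by-step plan for: {task}")
    result = run_agent(f"Implement this plan:\n{plan}")
    for _ in range(max_rounds):
        review = run_agent(
            "Compare the implementation against the plan.\n"
            f"PLAN:\n{plan}\nIMPLEMENTATION:\n{result}\n"
            "List discrepancies, or reply DONE."
        )
        if "DONE" in review:
            break
        # Re-feed the plan every round so it never falls out of context.
        result = run_agent(f"Fix these discrepancies:\n{review}\nPLAN:\n{plan}")
    return result
```

The design point is the last comment: the plan is re-sent on every round, which is the "keep reminding them to re-read it" management step.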
the team being so large no one can stay on top of things, this is very familiar turf when dealing with limited context windows in AI agents.
too much time passing since anyone has looked at or changed parts, this is something AI agents don't suffer from - they have "the eternal sunshine of the spotless mind": you are introducing them to the project fresh with every new context window. Hopefully you are simultaneously developing a tree-form documentation set with which they can easily navigate to the parts of the project they need to focus on and get "up to speed" for the new tasks at hand (which should include: maintenance of the documentation.)
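One shape such a tree-form doc set might take - a minimal sketch, with hypothetical module names and paths, of a root index that fans out into per-module docs an agent can navigate top-down:

```shell
# Hypothetical tree-form documentation layout: a top-level index that
# points down into per-module docs, so an agent reads the root first
# and descends only into the parts relevant to the task.
mkdir -p docs/modules/auth docs/modules/billing
cat > docs/INDEX.md <<'EOF'
# Project docs (start here)
- modules/auth/README.md    - login, sessions, tokens
- modules/billing/README.md - invoices, payments
EOF
echo "# Auth module" > docs/modules/auth/README.md
echo "# Billing module" > docs/modules/billing/README.md
```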
When you use AI tools the code base grows very quickly, only if you let it.
too quick to really comprehend, thus: the documentation - which AI agents aren't too bad at writing.
you get shitty architecture to go along with it, only when you allow it.
I've seen a lot of "10x PRODUCTIVITY!!!" claims, and when you move at those speeds you're going to encounter exactly the problems you describe. If you move more deliberately - as if you are managing a revolving-door team of consultants - and have the discipline to manage the architecture design and documentation, the implementation documentation, the unit and integration tests, etc., some may argue that it's easier to do it by hand. In some cases it may be. But I feel like we're at a point where you might expect more like a 3x productivity boost using AI agents vs. not using them, with the bonus that you get the artifacts of disciplined development. You're going to hear your human team bitch and moan about how "doing all that" (unit tests, docs) is slowing them down by 50-80%, so the humans tend to skimp in those areas, whereas AI doesn't complain at all when you task it with the 14th round of unit test coverage evaluation, refinement, and expansion.
When's the last time you used an AI agent to write a significant chunk of code? https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
It’s like a drug, the first time it does something fast and well you feel it’s so great, and that's a problem... if you're going to party with cocaine you're going to need some serious discipline to hold down a day job at the same time.
and can only ever suck. The world changes. The world of AI code development has changed significantly over the past year. A year ago I called it "cute, interesting potential, practically useless." 6 months ago the improvements were so dramatic I decided I needed to get a handle on it - yeah, it was limited in the complexity it could handle and did make a lot of slop, but it was so far ahead of where it was 6 months prior... Today, it's not perfect, but it's a lot better than it was 6 months ago, and while you can make a lot of slop with it, you also can keep a leash on it and clean up the slop while still making super-human forward progress.
Worst case you spend all your time wrestling with it and never get a finished product. - just like working with human teams.
You are absolutely right. There is value to these tools in software engineering, and the people who don't realize that and don't learn how and when to apply them will be left behind.
The bottom line for me is: it finds issues. More issues than typical human code reviews find. Like human code reviews, some of the issues it finds are trivial, unimportant, or debatable whether "fixing" them actually improves the product overall. Also like human code reviews, sometimes it finds things that look like issues but really aren't when you dig into the total picture. Then, some of the issues it finds are real, and some are subtle - actual memory leaks, unsanitized inputs, etc. - and if you're going to ignore those, you're just making worse software than is possible with the current tools.
Also, unlike most human code reviews, when it finds an issue it can and will do a thorough writeup explaining why it believes it is an issue, code snippets in the writeup, links into the source, proposed fixes, etc. All that detail is way too much effort to be a productive use of a human reviewer's time, but it genuinely helps in the evaluation of the issue and the proposed fix(es).
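For illustration, "unsanitized inputs" means things like string-built SQL. A toy sqlite3 sketch (the table and data are invented) of the kind of finding such a review flags, alongside the parameterized fix it would typically propose:

```python
# Toy example of an "unsanitized input" finding and its fix.
# Table and data are hypothetical; sqlite3 is stdlib.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Flaggable: user input interpolated straight into SQL (injection risk).
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Proposed fix: parameterized query; the driver binds the value safely.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A review writeup of the kind described above would show both snippets, explain that `'' OR '1'='1'` slips past the first version, and link to the offending line.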
Just like human code reviews, if you just accept and implement everything it says without thinking, you're an idiot.
Only an idiot would use AI for code cleanup or review. That's just asking for bugs.
You mistyped illusion, right?
I blame my current machine for this...