...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn't work well; I thought maybe this time it would be far enough along to be useful.
The task was relatively simple, and it involved doing some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
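(Not my actual code, but as an illustration of what "almost right" looks like in 3D math: a rotation composed in the wrong order passes every single-axis test and only breaks once two angles are nonzero. Which order is "correct" depends on your convention; the point is that the two versions silently disagree.)

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def orient_buggy(roll, yaw):
        # Composed backwards: identical to the version below whenever
        # one angle is zero, so every single-axis test passes.
        return rot_x(roll) @ rot_z(yaw)

    def orient_intended(roll, yaw):
        # Intended convention (assumed here): yaw applied after roll.
        return rot_z(yaw) @ rot_x(roll)

    v = np.array([1.0, 2.0, 3.0])
    print(orient_buggy(0.5, 0.0) @ v, orient_intended(0.5, 0.0) @ v)  # agree
    print(orient_buggy(0.5, 1.0) @ v, orient_intended(0.5, 1.0) @ v)  # diverge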
I spent nearly the whole day yesterday going back and forth with it and felt like I was in a mental fog. It wasn't until I'd had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.
The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would "fix" the bug and provide a confident explanation of what had been wrong... except it was clearly bullshit, because the fix didn't work.
I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?
For reference, I used Opus 4.6 Extended.
If you're stuck at review, you aren't seeing 10x development; you're seeing 10x code generation.
This is especially important because without the review/test/deploy part of the pipeline, you aren't actually making any progress towards business goals.
Once you do get these parts sorted, you can then look at what multiplier you're seeing.
That's not to say there isn't an improvement in your workflow, just that you can't say with any certainty what kind of improvement without measuring end to end.
It might turn out that the rest of the pipeline is way easier, in which case your multiplier will be higher; it might also be much harder, in which case the multiplier will be lower.
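To put rough numbers on that (the 30/70 split below is made up purely for illustration): if writing code is 30% of the end-to-end cycle and gets 10x faster, the pipeline as a whole only speeds up by about 1.4x, and if review slows down under the extra volume, the net multiplier can dip below 1.

    # Amdahl-style back-of-the-envelope; all fractions are illustrative.
    def end_to_end_multiplier(code_fraction, codegen_speedup, rest_slowdown=1.0):
        rest_fraction = 1.0 - code_fraction
        new_time = code_fraction / codegen_speedup + rest_fraction * rest_slowdown
        return 1.0 / new_time

    print(end_to_end_multiplier(0.3, 10))       # ~1.37x, nowhere near 10x
    print(end_to_end_multiplier(0.3, 10, 1.5))  # ~0.93x: slower review eats the gain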
I'm not taking shots; I mean this seriously, especially if you need to report any of this to the rest of the business.
edit: In addition, if it turns out that review is going to be a bottleneck, you can get extra resources pointed in that direction, which will benefit the workflow overall.
another edit: I would consider correctly managing the expectations of those you report to a vital skill.
Exactly this. My experience with our company's wrapper on Claude lines up with OP, not with this comment thread.
Everyone seems to forget that everything you write is a liability. Code that is never written or generated can't have bugs; comments that don't exist never become inaccurate; and "knowledge" that isn't duplicated into a repo can't drift out of alignment with business goals as they change over the long term.
From what I've seen, people claiming a "10x increase" did not have a strong foundation to begin with and/or did not use tools like IDEs effectively. No offense to the thread OP, whose post itself reads like a generated response, but in the time it took to do all of that, a strong engineer would be long done. Everything listed should be worked out with business and product partners before ever touching code.
Ehh, it really depends on where the risk is, and the problem is that LLMs can't evaluate that unless you feed them everything. Some projects need code experiments before you settle on an architecture, but that's only if you're a pioneer (which, frankly, is where the money is).
That's a very good distinction, absolutely. It's just code generation at this stage.
Review was already the bottleneck before (as I believe was the case for many companies), but now, with 10x the code generated for review, the bottleneck has turned into a dripping faucet.