this post was submitted on 05 Apr 2026
21 points (92.0% liked)
TechTakes
Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it’s gonna fuck up cybersecurity — this thread goes over it a bit
XCancel link for those of us sick of being badgered to sign up/in
On a more productive note, this feels likely to be tied in with the usual issue of AI sycophancy, i.e. the false positive rate. If you ask the model to tell you about security vulnerabilities, it's never going to tell you there aren't any, any more than existing scanners will.

When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone's scanner turned up and figure out whether each one was: something that actually needed a mitigation we could apply on our box; something that needed to be fixed somewhere else in the network (usually on their actual servers); or (most commonly) a false positive, e.g. "your software version would be vulnerable here, which is why it flagged, but you don't have the relevant module enabled, and if an attacker is able to modify your system to enable it you're already compromised to a far greater degree than this would allow." And that was with existing tools that weren't trying to match a pattern and complete a prompt.

Given that we've seen the shitshow that is Claude Code, I think it's pretty clear they're getting high on their own supply, and this announcement ought to be catnip for black hats.
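To make the triage concrete, here's a minimal sketch of the kind of sorting described above. Everything here is illustrative — the `Finding` shape, the bucket names, and the "module not enabled means likely false positive" rule are assumptions standing in for what a real triage process weighs, not any actual scanner's API:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One line item from a hypothetical vulnerability scan report."""
    cve: str
    module: str    # the software module the CVE actually applies to
    severity: str


def triage(findings, enabled_modules, local_mitigations):
    """Sort scanner findings into the three buckets described above.

    Scanners typically flag on version match alone, so a finding against
    a module that isn't even enabled on the box gets downgraded: an
    attacker who can enable modules already owns the system.
    """
    buckets = {"mitigate_here": [], "fix_elsewhere": [], "false_positive": []}
    for f in findings:
        if f.module not in enabled_modules:
            buckets["false_positive"].append(f)
        elif f.cve in local_mitigations:
            # A mitigation we can apply on this box (e.g. a config change).
            buckets["mitigate_here"].append(f)
        else:
            # Real exposure, but the fix belongs elsewhere in the network.
            buckets["fix_elsewhere"].append(f)
    return buckets
```

The point of the sketch is the false-positive branch: it's the most common outcome in practice, and it's exactly the judgment call an eager-to-please model is worst positioned to make.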