I think the main difference here is that breaking RSA now just requires scaling up existing quantum hardware, while breaking LWE or other lattice-based schemes would need a major conceptual breakthrough. The former is much more likely, and in any case, cryptographers are the most paranoid people on the planet for a reason.
Unfortunately, one can never be sure of much in cryptography until P vs NP is solved (and then some: even P ≠ NP wouldn't rescue crypto, since it only guarantees worst-case hardness, while encryption needs average-case hardness).
(Of course, just because some people say that scaling up is enough doesn't mean it's actually true. For breaking RSA, we now have Shor's algorithm, while the only evidence the AI bros have for superintelligence emerging from scaling is "trust me bro".)
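To make that asymmetry concrete, here's a toy Python sketch of the classical half of Shor's algorithm. The only quantum step in the real thing is finding the order r of a mod N; I fake it below with brute force, which is exactly the exponential part a big enough quantum computer replaces. Everything that turns the period into factors is elementary number theory that runs today. (The function names and the tiny N = 15 example are mine, just for illustration.)

```python
from math import gcd

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force.
    This is the step Shor's algorithm does efficiently on quantum hardware."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N: int, a: int) -> tuple[int, int] | None:
    """Recover factors of N from the order of a, if a is a 'good' base."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky: a already shares a factor with N
    r = order(a, N)               # <-- the quantum subroutine in real Shor
    if r % 2 == 1:
        return None               # bad base, retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # another bad case, retry
    p = gcd(y - 1, N)
    return (p, N // p) if 1 < p < N else None

print(shor_classical_part(15, 7))  # (3, 5)
```

So "break RSA" really is an engineering roadmap with one hard hardware step, whereas nothing analogous exists for LWE.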
I've seen this story play out in software engineering: people were very impressed when an AI did unexpectedly well in one out of 50 attempts on an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. The only real result is that existing software has become far more buggy and insecure.
Now we have people using AI in science and math because it looked impressive in scattered demonstrations of solving math problems. I have friends asking me why I'm not using AI, and telling me that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I've already watched the same story play out in software engineering, so what makes math any different?