Hmm, isn't that a somewhat paradoxical take?
If the proposed solution is to pin your idea of what is safe to a rigorously formalised security policy, doesn't that presuppose that you know exactly what you're doing, i.e. that your problem domain is narrow enough for you to comprehend it fully? And isn't that precisely not the case for most (all?) applications that benefit from AI?
I didn't read the complete article, so mea culpa, but some examples of systems where this is feasible would be welcome. It certainly doesn't seem feasible for my go-to example: software development with Claude Code.
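To make that concrete, the only fully formalised policy I can picture is something like a hard allow-list, where every permitted action is enumerated up front. Here's a minimal sketch (purely hypothetical, not any real tool's API) of why that works for a narrow domain but not for open-ended dev work:

```python
import shlex

# Hypothetical allow-list policy: every permitted action must be
# enumerated in advance. Workable for a narrow domain (say, a CI
# runner that only ever lints and tests), hopeless for open-ended
# software development, where the set of "safe" commands is unbounded.
ALLOWED_COMMANDS = {
    ("npm", "run", "lint"),
    ("npm", "run", "test"),
    ("git", "status"),
}

def is_permitted(command: str) -> bool:
    """Return True only if the exact command is on the allow-list."""
    return tuple(shlex.split(command)) in ALLOWED_COMMANDS

assert is_permitted("npm run test")
assert not is_permitted("rm -rf /")       # denied, as intended
assert not is_permitted("npm run build")  # also denied: the policy
                                          # only covers what we foresaw
```

The last assertion is the problem in miniature: the policy is only as complete as your foresight, which is exactly what you don't have in the domains where AI assistance is most useful.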