top 2 comments
[-] plc@feddit.dk 1 points 1 week ago

Hmm, isn't that a somewhat paradoxical take?

If the proposed solution is to pin your idea of what is safe on a rigorously formalised security policy, doesn't that entail that you already know what you're doing (i.e. your problem domain is narrow enough that you can comprehend it fully)? And isn't that exactly not the case for most, if not all, applications that benefit from AI?

Didn't read the complete article, so mea culpa, but some examples of systems where this is feasible would be welcome.

It certainly doesn't seem feasible for my go-to example: software development using Claude Code.

[-] d.rizo@piefed.social 1 points 1 week ago* (last edited 1 week ago)

I only use AI cautiously, mostly because I see how AI companies apply "flex tape" to fix issues. They themselves don't really know how the AI will behave and are only treating symptoms while ignoring the real problem.

That's why I think your stance that "we don't 'really' know what AI is doing" is right. "Knowing the problem well enough to formalize the risks/policies" is a very valid point, but on the other hand, I think you would agree that we've already solved issues like this. Think of a "black box".

If you see AI as a black box, you might come to a similar conclusion as the author.

I know that this is not the fix we need or even want. But better this than playing the beta tester for AI companies.
