[-] [email protected] 1 points 4 months ago

I can see a threat model already from 2014.

Anyway, I think it's a tradeoff that's hard to assess quantitatively, as risk is always subjective. From where I stand, the average person using native clients and managing their own keys has a much higher chance of being compromised (via far simpler vectors), for example. On the other hand, someone using a clean OS, storing the key on a Yubikey and manually vetting the client tool can resist sophisticated attacks better than someone using web clients.

I just don't see this as a hill to die on either way. In fact, I also argue in my blog post that for the most part, this technical difference doesn't affect security enough to make a difference for the average user.

I guess you disagree and that's fine.

[-] [email protected] 1 points 4 months ago

Well, yes-ish.

An organization with the resources to coerce or compromise Proton or similar wouldn't have trouble identifying individual users "well enough" (trivially, by IP address). At that point there is absolutely nothing stopping a package distributor from serving different content by IP. Not even signatures help in this context, as the signature still comes from the same party that was coerced or compromised.
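To illustrate what I mean (a toy sketch, not anyone's actual code; the IP and payloads are made up), nothing in the delivery path prevents a mirror from doing something like this, and the artifact would still carry a valid signature from the same signer:

```python
# Toy illustration only: a package mirror that serves a different artifact
# to one targeted IP. Both artifacts can be signed with the same key, so the
# client's signature check says nothing about the targeting.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGETED_IPS = {"203.0.113.7"}  # hypothetical target

class MirrorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        if client_ip in TARGETED_IPS:
            body = b"tampered-package-bytes"    # backdoored build, same signer
        else:
            body = b"legitimate-package-bytes"  # what everyone else gets
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MirrorHandler).serve_forever()
```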

Also, most people won't (or can't) analyze every code change after every update, which means that in practice detection is even less likely for OS packages than for web pages (where it's much easier to debug code and inspect network flows). The OS attack surface is also much broader.

In general anyway, this is such a sophisticated attack (especially given its targeted nature) that it's not relevant for the vast, vast majority of people. If you deal with super sensitive data you can build your Proton client yourself, or simply use the bridge (which ultimately is exactly like other client-side tooling), so for the very rare corner cases where this threat is relevant, a solution exists. Actually, in those cases you probably don't want to use email at all. So my question is: who is the threat actor you are concerned about?
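For what it's worth, the bridge really is just ordinary client-side tooling: it exposes a local IMAP/SMTP endpoint that any client or script can talk to. A minimal sketch with Python's stdlib, assuming the usual default port 1143 and the bridge-generated credentials shown in the Bridge UI (both are assumptions, check your own setup):

```python
# Sketch: reading mail through a locally running Proton Bridge over IMAP.
# Port and credentials are assumptions (defaults / values from the Bridge UI).
# The Bridge presents a locally generated certificate, hence the relaxed TLS
# context for this localhost-only connection.
import imaplib
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

conn = imaplib.IMAP4("127.0.0.1", 1143)
conn.starttls(ssl_context=ctx)
conn.login("[email protected]", "bridge-generated-password")  # hypothetical
conn.select("INBOX")
typ, data = conn.search(None, "ALL")
print(f"{len(data[0].split())} messages in INBOX")
conn.logout()
```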

All in all, I think labeling this setup "insecure" is not accurate and can paint the wrong picture for people who are less technically competent.

[-] [email protected] 1 points 4 months ago

They wrote that they don't want to "write and forget" but to engage with people (as they do on Reddit, for better or worse). I think it's debatable, but it sounds reasonable to me. What is the value of an official account that just reposts one-way communication already published on the blog and in the newsletter? Anybody can build such a bot, but it's not "presence" the way I interpret it.

[-] [email protected] 1 points 10 months ago

Django Unchained

Isn't it ironic that a movie with so many uses of that word helped you understand that word better?

To me it seems like a very good reason to believe that people shouldn't be afraid of the word's syntax, but should definitely oppose its use when the semantics are the despicable ones.

[-] [email protected] 1 points 11 months ago

Comfort is the main reason, I suppose. If I mess up the WireGuard config, I need to go to the KVM console even just to debug the tunnel. It also means that if I'm somewhere else and have to SSH into the box, I can't just plug in my Yubikey and SSH from there. It's a rare occurrence, but still...

Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum SSH hardening has been done. Their resource consumption is negligible, so they have no real impact.
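For context, by "bare minimum hardening" I mean something along these lines in `/etc/ssh/sshd_config` (a sketch, adjust options and users to your own setup; the user name is hypothetical):

```
# Key-only auth, no root login, optionally limit who can log in at all
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
AllowUsers myuser
```

With that in place, the bots can knock all day and only waste their own time.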

To me the tradeoff is a slight inconvenience vs. a slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say the choice is not really a big one.

[-] [email protected] 1 points 1 year ago

I wish it worked like that, but I don't think it does. Connecting clouds introduces many complex problems: data synchronization and avoiding split-brain scenarios, a far more complex network setup, stateful storage that has to account for the quirks and peculiarities of every service across every cloud, service accounts and permissions that need to be granted and segregated for all of them, and much more. You may gain resilience in some areas, but you introduce a lot more things that can fail, be misconfigured or compromised.

Plus, a complex setup by definition makes it harder to identify SPOFs, especially since it's very likely nobody on the team is going to be an expert in all the clouds in use.

To stick with your disk analogy, a single disk with a backup might be a better solution for many people, considering that otherwise you'd need a RAID controller that can itself fail, plus the knowledge to handle and manage a RAID array properly, in addition to paying 4 or 5 times as much for storage. Obviously this is just to make a point; I don't actually think RAID 5 vs JBOD adds complexity comparable to what multi-cloud architecture adds over single-cloud.

[-] [email protected] 1 points 1 year ago

Complexity brings fragility. It's not about doing the job right; it's that "right" means dealing with a level of complexity, a number of moving parts and configuration options so high, that the bar is set very high.

Also, I would argue that a large number of organizations don't actually need the resilience that they pay a very high price for.

[-] [email protected] 1 points 1 year ago

Yeah, in general you can't mess up the building blocks from the PoV of availability or internal design. That is true, since you are outsourcing them. You can still mess them up from other points of view (think about how many companies got breached due to misconfigured S3 buckets).
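To make the S3 example concrete (a sketch with boto3; the bucket name is hypothetical and credentials come from the usual AWS config): the typical breach is a bucket left publicly readable, which is entirely on the customer side and takes one setting to prevent.

```python
# Sketch: enforce the "block public access" settings on a bucket with boto3.
# Bucket name is hypothetical; this only addresses the common public-exposure
# misconfiguration, not IAM policy mistakes in general.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```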
