[-] [email protected] 2 points 4 months ago

That takes courage to say, after 90% of your comments have to do with (speculations on) me.

Anyway, good riddance.

[-] [email protected] 1 points 4 months ago

I can see a threat model already from 2014.

Anyway, I think it's a tradeoff that is hard to assess quantitatively, as risk is always subjective. From where I stand, the average person using native clients and managing their own keys has a much higher chance of being compromised (by far simpler vectors), for example. On the other hand, someone using a clean OS, storing the key on a YubiKey and manually vetting the client tool can resist sophisticated attacks better than someone using web clients.

I just don't see this as a hill to die on either way. In fact, I also argue in my blog post that, for the most part, this technical difference doesn't affect security enough to make a difference for the average user.

I guess you disagree and that's fine.

[-] [email protected] 1 points 4 months ago

Well, yes-ish.

An organization with the resources to coerce or compromise Proton or similar wouldn't have trouble identifying individual users "well enough" (trivially, by IP address). At that point there is absolutely nothing stopping a package distributor from serving different content by IP. Not even signatures help in this context, as the signature still comes from the same party that was coerced or compromised.
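To make this concrete, here is a minimal sketch (purely hypothetical: the IP and payloads are made up, this is not any real distributor's code) of how trivially a server can discriminate by client IP; if the same party also controls the signing key, the targeted payload would still carry a valid signature:

```python
# Hypothetical sketch of a coerced/compromised package mirror targeting one
# user by source IP. The IP and payloads below are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGETED_IP = "203.0.113.7"  # documentation-range address, not a real target

class MirrorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Same URL for everyone; the response depends on who is asking.
        if self.client_address[0] == TARGETED_IP:
            payload = b"backdoored package bytes"   # re-signed by the same key
        else:
            payload = b"legitimate package bytes"
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MirrorHandler).serve_forever()
```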

Also, most people won't (or can't) analyze every code change after every update, which means that in practice detection is even less likely for OS packages than it is for web pages (where it's much easier to debug code and inspect network flows). The OS attack surface is also much broader.

In general anyway, this is such a sophisticated attack (especially given its targeted nature) that it's not relevant for the vast, vast majority of people. If you deal with super sensitive data you can build your Proton client directly, or simply use the bridge (which ultimately is exactly like other client-side tooling), so for those very rare corner cases where this threat is relevant, a solution exists. Actually, in those cases you probably don't want to use email at all. So my question is: who is the threat actor you are concerned about?

All in all, I think that labeling the setup "insecure" because of this is not accurate and can paint a wrong picture for people who are less technically competent.

[-] [email protected] 1 points 4 months ago

They wrote that they don't want to "write and forget" but to engage with people (as they do on Reddit, for better or worse). I think it's debatable, but it sounds reasonable to me. What is the value of having an official account which just reposts one-way communication already published on the blog and in the newsletter? Anybody can build such a bot, but that's not "presence" the way I interpret it.

[-] [email protected] 1 points 1 year ago

> Instant transactions are periodic, I don’t know any bank that runs them globally on one machine to compensate for time zones.

Of course they don't run them on one machine. I know that UK banks have DCs only in the UK. Also, the daily pattern is almost identical every day. You spec to handle the peaks, and you are good. Even if your systems sit at 20% for half of every day, you are still saving tons of money.

> Batches happen at a fixed time, are idle most of the day.

Between banks, yes; from customer to bank they are not. Also, most circuits are now moving toward instant payments, so payments are settled more frequently between banks.

> My experience are banks (including UK) that are modernizing, and cloud for most apps brings brutal savings if done right, or moderate savings if getting better HA/RTO.

I want to see this happening. I work for one, and I see how our company is literally bleeding money on cloud costs.

> But that should have been a lambda function that would cost 5 bucks a day tops

One of the most expensive products, at least for high loads. Plus you need to sign things with HSMs etc., and perhaps you want a secure environment. So I would say... it depends.

Obviously I agree with you that you need to design rationally and not just do a dumb translation of the architecture, but you are paying for someone else to do the work plus the service. Cloud helps you delegate some responsibilities, but it can't be cheaper, especially in the long run, since you are not capitalizing anything.

[-] [email protected] 1 points 1 year ago

Systems are always overspecced, obviously. Many companies in those industries are dinosaurs running on very outdated systems (like banks) after all, and they all existed before Cloud was a thing.

I also can't speak for other industries, but I work in fintech and banks have a very predictable load, to the point that their numbers are almost fixed (and I am talking about big UK banks, not small ones).

I imagine retail and automotive are similar: they have so much data that their average load can be predicted almost exactly, which allows for good capacity planning, and their audience is so wide that global spikes are very unlikely.

Industries that have variable load are those that run CPU-intensive (or memory-intensive) tasks and have a very variable customer base: media (streaming), AI (training), etc.

I also worked in the gaming industry, and while there are huge peaks, the jobs are not resource-intensive enough to need anything more than good capacity planning.

I assume however everybody has their own experiences, so I am not aiming to convince you or anything.

[-] [email protected] 1 points 1 year ago

I am specifically saying that redundancy doesn't solve everything magically. Redundancy means coordination, more things that can also fail. A redundant system needs more care, more maintenance, more skills, more cost. If a company decides to use something more sophisticated without the corresponding effort, it's making things worse. If a company with a 10-person department thinks that by using Cloud it can have a resilient system like it could with 40 people building it, they are wrong, because they now have a system way more complex than they can handle, despite the fact that storage is replicated easily by clicking in the GUI.

[-] [email protected] 1 points 1 year ago

I wish it worked like that, but I don't think it does. Connecting clouds means introducing many complex problems: data synchronization and avoiding split-brain scenarios, a far more complex network setup, stateful storage that needs to account for all the quirks and peculiarities of every service across all clouds, service accounts and permissions that need to be granted and segregated for all of them, and much more. You may gain resilience in some areas, but you introduce a lot more things that can fail, be misconfigured, or be compromised.

Plus, a complex setup makes it harder by definition to identify SPOFs, especially considering it's very likely nobody in the workforce is going to be an expert in all the clouds in use.

To keep using your analogy of the disks, a single disk with a backup might be a better solution for many people, considering that otherwise you might need a RAID controller that can itself fail, plus all the knowledge to handle and manage a RAID array properly, in addition to paying 4 or 5 times as much for storage. Obviously this is just to make a point; I don't actually think that RAID 5 vs JBOD introduces complexity comparable to what a multi-cloud architecture adds over single-cloud.

[-] [email protected] 1 points 1 year ago

Complexity brings fragility. It's not about doing the job right; it's that "right" means dealing with a level of complexity, with so many moving parts and configuration options, that the bar is set very high.

Also, I would argue that a large number of organizations don't actually need the resilience that they pay a very high price for.

[-] [email protected] 1 points 1 year ago

Well, I did not mean replacement (in fact, most orgs run in clouds which use VMs); I meant that a lot of orgs moved from VMs to containers/Kubernetes as the way to slice their compute. Often the technologies are combined, so you are right.

[-] [email protected] 1 points 1 year ago

I wouldn't say that namespaces are virtualization either. Containers don't virtualize anything; namespaces are all inherited from the root namespaces and therefore completely visible from the host (with the right privileges). It's just a completely different technology.
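For example, here is a rough sketch of what I mean by "visible from the host" (Linux only, needs enough privileges, and the container PID is a made-up example): a containerized process is just another PID on the host, and its namespaces are plain entries under /proc that the host can read directly:

```python
# Minimal sketch: compare the namespaces of a host process and of a process
# running inside a container, as seen from the host. CONTAINER_PID is
# hypothetical; in practice you'd look it up with ps or your runtime's inspect.
import os

HOST_PID = 1           # the host's init process
CONTAINER_PID = 12345  # made-up PID of a containerized process

for ns in ("pid", "net", "mnt", "uts"):
    host_ns = os.readlink(f"/proc/{HOST_PID}/ns/{ns}")
    cont_ns = os.readlink(f"/proc/{CONTAINER_PID}/ns/{ns}")
    # Different inode numbers only mean a child namespace was created;
    # the host kernel still sees both processes and both namespaces.
    print(f"{ns}: host={host_ns} container={cont_ns}")
```

Nothing is hidden from the host here, which is quite different from a VM, where the guest kernel and its processes are opaque to the host OS.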

[-] [email protected] 1 points 1 year ago

That's not an engine, it's a meta-engine. The results are still tied to the engines used, which means that if they are trash, you get trash. Kagi uses a mix of Google/Yandex/Brave etc. and then processes the results as well, in addition to having their own scraper for things like the small web (which is great for surfacing personal blogs).

They are not comparable. Also, Kagi's privacy policy is exemplary, and the account can now be paid in crypto (if you don't want to use a credit card).

Besides, there is no such thing as free hosting; similarly to Lemmy, it's just someone else paying.
