[-] [email protected] 1 points 10 months ago

Django Unchained

Isn't it ironic that a movie with so many uses of that word helped you understand that word better?

To me it seems like a very good reason to believe that people shouldn't be afraid of the word itself, but should definitely oppose its use when the meaning behind it is the despicable one.

[-] [email protected] 1 points 11 months ago

Comfort is the main reason, I suppose. If I mess up the WireGuard config, I need to go to the KVM console even just to debug the tunnel. It also means that if I'm somewhere else and have to SSH into the box, I can't just plug in my Yubikey and SSH from there. It's a rare occurrence, but still...
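For context, a WireGuard tunnel is defined by a tiny config like the sketch below (keys and addresses are placeholders); a single wrong line (say, AllowedIPs) is enough to cut the tunnel and force a trip to the KVM console:

```
# /etc/wireguard/wg0.conf — minimal sketch, placeholder keys/addresses
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32   # get this wrong and the peer's traffic is dropped
```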

Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum hardening for SSH has been done. The resource consumption is negligible, so it has no real impact.
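For reference, the "bare minimum hardening" I have in mind is roughly key-only auth and no root login; a sketch (the AllowUsers name is a placeholder):

```
# /etc/ssh/sshd_config — minimal hardening sketch
PasswordAuthentication no   # key-only auth kills the bots outright
PermitRootLogin no
PubkeyAuthentication yes
MaxAuthTries 3
AllowUsers myuser           # placeholder username
```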

To me the tradeoff is slight inconvenience vs slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say that the choice is not really a big one.

[-] [email protected] 1 points 11 months ago

Yeah, I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you will install the security patches ASAP. There are also virtual patches (CrowdSec has a virtual patch for that CVE), but they might not be very effective.
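On Debian/Ubuntu, enabling that daily run is just two lines of APT config (from the unattended-upgrades package conventions):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```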

I argue that VPN software is a smaller attack surface, but the problem (CVEs) still exists for everything you expose.

[-] [email protected] 1 points 11 months ago

Also, hypervisors get escape vulnerabilities every now and then. I would say that on a realistic scale of escape difficulty, a good container (doesn't matter if it's using Docker or something else) is a good security boundary.

If this is not the case, I wonder what your scale extremes are.

A good container has very little attack surface, since it can have almost no code or tools available, a read-only fs, no user privileges or capabilities whatsoever, and possibly even a syscall filter. Sure, the kernel is the same, but then the only alternative is to split that per application (VM-like), and you move the problem to hypervisors.
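As a sketch, most of that locking down can be expressed in a Compose file (the image name is hypothetical; a custom seccomp profile can tighten the syscall surface further):

```yaml
services:
  app:
    image: myapp:latest          # hypothetical image
    read_only: true              # read-only root filesystem
    user: "1000:1000"            # no root inside the container
    cap_drop:
      - ALL                      # drop every Linux capability
    security_opt:
      - no-new-privileges:true   # block privilege escalation (setuid etc.)
    tmpfs:
      - /tmp                     # writable scratch space, nothing persistent
```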

In the context of the question asked, I think the gains from reducing the attack surface are completely outweighed by the loss in functionality and the waste of resources.

[-] [email protected] 1 points 11 months ago

Completely agree, which is why I do the same.

Additional bonus: proxies that interact with the Docker API directly (I think Caddy can do it too) save you from exposing the services on any port at all (only within the Docker network). So it's way less likely to expose a service's port by mistake, and there's no need for arbitrary and unique localhost ports.
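With the caddy-docker-proxy plugin, for instance, the service publishes no ports at all and is wired up purely via labels (domain and app image are placeholders; label syntax as I recall it from the plugin's README, so double-check):

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # reads labels via the Docker API

  app:
    image: myapp:latest                          # hypothetical; note: no ports section
    labels:
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 8080}}"  # reachable only on the Docker network
```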

[-] [email protected] 1 points 11 months ago

Yep, I agree. Especially looking at all the usernames that are tried. I do the same, and the only risk comes from SSH vulnerabilities. Since nobody would burn an SSH 0-day (priceless) on my server, unattended upgrades solve this problem too, for the most part.

[-] [email protected] 1 points 1 year ago

Proton runs fully on their own hardware, they have some positions open!

[-] [email protected] 1 points 1 year ago

No, it's not true. A single system has fewer failure scenarios, because it doesn't depend on external controllers or anything else that makes the system distributed and that can fail, causing your system to fail (in a way that may or may not be tolerated).

This is especially true from a security standpoint: complexity adds attack surface.

Simple example: a Kubernetes cluster has more failure scenarios than a single node. With the node you have hardware failure, misconfiguration of the node, and network failure. With a Kubernetes cluster you have all of that for each node (each potentially with marginally less impact, because it depends for example on stateful storage, which, if you mitigate it, introduces other failure scenarios as well), plus: if the control plane goes up in flames your node is useless, if the etcd data gets corrupted your node is useless, and anything that happens with resources (a bug, a misuse of the API, etc.) can break your product. You have more failure scenarios because your product depends on more components working at the same time. This is what it means that complexity brings fragility.

Looking at it from the security side: a single instance can be accessed only via SSH, so if you are worried about compromise you have essentially one service to secure. Once you run on Kubernetes you have the CI/CD system, the Kubernetes API, the Kubernetes supply chain, etcd, and if you are in the cloud you have plenty of cloud permissions that can indirectly grant access to the control plane and to a console. Now you need to secure 5-6-7 entry points to a node.

Mind you, I am not advocating against the use of complex systems; sometimes they are necessary. But if the complexity is not fully managed and addressed, you have a more fragile system. Essentially, complexity is a necessary evil in response to other necessities.

This is the reason why nobody would recommend that someone who needs to run a single static website run it on Kubernetes, for example.

You say "a well designed system", but designing well is obviously harder the more complexity there is. Redundancy doesn't always work, because redundancy needs coordination and processes that themselves depend on external components.

In any case, I agree that you can build a robust system in the cloud! The argument I am trying to make is that:

  • you need to be aware that you are introducing complexity that needs attention and careful design if you don't want it to result in more fragility and exposure
  • you need to spend way more money
  • you need to balance the cost with the actual benefits you are gaining

And mind you, everything you can do in the cloud you can also do on your own, if you invest in it.

[-] [email protected] 1 points 1 year ago

Of course the problem is solved, but that doesn't mean that the solution is easy. Also, distributed protocols still need to work on top of a complicated network and within real-life constraints in terms of performance (to list a few). A bug, a misconfiguration, an oversight, and you have a problem.

Just to make an example, I remember a Kafka cluster with 5 replicas completely shitting its pants for 6 hours rebalancing data during a planned maintenance where one node was brought offline. It caused one of the longest outages to date, with the websites that relied on it offline. Was it our fault? A misconfiguration? A bug? It doesn't matter: it's a complex system that was implemented, and probably something was missed.

Technology is implemented by people, and complexity increases the chances of mistakes; I'm not sure this can be argued.

Making it harder to identify SPOFs means you might miss your SPOF, and that means having liabilities and still having scenarios where your system can crash, on top of paying quite a lot to build a resilience that you don't actually achieve.

A single instance with 2 failure scenarios (disk failure and network failure) - to make an example - is not more fragile than a distributed system with 20 failure scenarios. Failure scenarios and SPOFs can have compensating controls and be mitigated successfully. A complex system where these can't be fully identified can't have compensating controls, and the residual risk might be much higher. So yes, a single disk is more likely to fail than 3 disks at once, but this doesn't give the whole picture.
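To make the arithmetic concrete, here is a toy sketch with entirely made-up probabilities: redundancy shrinks the disk term to almost nothing, but every hard dependency you add (control plane, etcd) multiplies back into the total.

```python
# Toy model, entirely made-up annual failure probabilities.
p_disk, p_net = 0.03, 0.01

# Single node: down if its disk OR its network fails.
p_single = 1 - (1 - p_disk) * (1 - p_net)

# Cluster: disks are redundant (all 3 must fail at once),
# but the control plane and etcd are new hard dependencies.
p_ctrl, p_etcd = 0.02, 0.02
p_cluster = 1 - (1 - p_disk**3) * (1 - p_net) * (1 - p_ctrl) * (1 - p_etcd)

print(f"single: {p_single:.4f}, cluster: {p_cluster:.4f}")
```

Under these toy numbers the cluster is actually down more often (≈4.9% vs ≈4.0% per year) despite the redundant disks, which is exactly the "more components must work at the same time" point.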

[-] [email protected] 1 points 1 year ago

Yeah, in general you can't mess up the building blocks from the PoV of availability or internal design. That is true, since you are outsourcing them. You can still mess them up from other points of view (think about how many companies got breached due to misconfigured S3 buckets).

[-] [email protected] 1 points 2 years ago

Kagi is an engine, SearX is a meta-engine. That's what I meant. It means Kagi does not simply collate results from multiple sources (like SearX does), but implements its own logic. This means that - for example - it deranks websites with many trackers, or can implement various features on top of the results. So it's not a nitpick, it's a substantial difference between an engine (Kagi) and a meta-engine (SearX), which is essentially a proxy + aggregation of other engines.

It's a known fact that Brave optimizes results based on Google data, and the Kagi folks themselves in fact added that - since it is cheaper than the Google API - it could be a way to eventually reduce Google API costs without impacting results.

That said, AFAIK Kagi does not pose as a nonprofit; I think they make it extremely clear that running searches (scraping, paying for APIs, etc.) costs money and that they need to be profitable. Their stance is that with a subscription model their business interests align with users' interest in good search results, rather than results that benefit advertisers, which is completely reasonable. This is literally written in their "why pay for searches" article that is presented alongside the pricing.

Of course it is a big difference, and you can argue the pros and cons of both options. I personally think the internet should be based neither on megacorps nor on free labor. Would I prefer Kagi being a co-op? Sure. But relying solely on free labor is not free from moral implications either (sure, you can donate, and I do to Lemmy for example, but only a minority does).

[-] [email protected] 1 points 2 years ago

Did you mean western?


loudwhisper

joined 2 years ago