this post was submitted on 26 Aug 2023
178 points (91.2% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self hosted stuff? For those who've tried it and went back to Docker, why?

I'm doing my 3rd rebuild of a K8s cluster after learning things that I've done wrong and wanted to start fresh, but when enhancing my Docker setup and deciding between K8s and Docker Swarm, I decided on K8s for the learning opportunities and how it could help me at work.

What's your story?

top 50 comments
[–] [email protected] 114 points 1 year ago (4 children)

Kubernetes is useful if you have gone full cattle-over-pets, and that is very uncommon in home setups. If you only own one or two small machines, you cannot destroy and rebuild infra easily in a "cattle" way, and the bloat that comes with Kubernetes doesn't help you either.

In homelabs and home servers, the pros of Kubernetes (high availability, auto-scaling, GitOps integrations, etc.) are not very useful. Why would you need autoscaling and HA for an SFTP server used only by you? Instead you write a docker-compose.yml and call it a day.
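For reference, the "call it a day" version might look roughly like this. A minimal sketch: the `atmoz/sftp` image, user, and paths are just example choices, not a recommendation.

```yaml
# docker-compose.yml — single-user SFTP, no HA, no autoscaling
services:
  sftp:
    image: atmoz/sftp:alpine      # example image; swap for whatever you prefer
    ports:
      - "2222:22"                 # SFTP reachable on host port 2222
    volumes:
      - ./data:/home/me/upload    # uploads land in ./data on the host
    command: me:changeme:1000     # user:password:uid (atmoz/sftp convention)
    restart: unless-stopped
```

`docker compose up -d` and you're done; no control plane, no etcd, no manifests.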

[–] [email protected] 45 points 1 year ago

The one exception to this is if you're using your homelab to learn kubernetes.

That was the only time I used K8s and k3s on my homelab.

And for anything that I do want to set up in a HA/cattle kind of way, I use Docker Swarm, as it feels like a more comfortable extension of docker compose.
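As an illustration of that "comfortable extension": the same compose dialect grows a `deploy:` section that Swarm understands. A sketch with placeholder names; plain `docker compose` ignores most of the `deploy:` keys.

```yaml
# stack.yml — deploy with: docker stack deploy -c stack.yml demo
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3                 # Swarm spreads these across cluster nodes
      restart_policy:
        condition: on-failure
```

Everything else in the file stays the compose you already know, which is the appeal.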

[–] [email protected] 11 points 1 year ago

This right here

[–] [email protected] 5 points 1 year ago (1 children)

This mostly, I haven't seen a compelling reason to leave my docker setup.

[–] [email protected] 8 points 1 year ago (1 children)

I think the biggest reasons for me have been growth and professional development. I started my home cluster 8 years ago as a single node, basically just running the hack/ scripts on my Linux desktop. I've been able to grow that same cluster to 6 hosts as I've replaced desktops and got a bit into the used enterprise server scene. I've replaced multiple routers and moved behind Cloudflare, added a private CA a few times, added solid persistence with rook+ceph, built my ideal telemetry stack, added Velero backups into Backblaze B2, and probably a lot more I'm not thinking of.

That whole time, I've had to do almost zero maintenance or upgrades on the side projects I've built over the years, or on the self-hosted services I've run. That is, if you ignore the day or so a year I've spent cursing my propensity to upgrade a tad too early and hit snags; though I've just about always been able to resolve them pretty quickly and have learned even more from those times.

And on top of that, I get to take a lot of that expertise to work where it happens to pay quite well. And I've spent some time working towards building the knowledge into a side gig. Maybe someday that'll pay the bills too.

[–] [email protected] 4 points 1 year ago

One line from your comment struck a chord. The part about maintenance and upgrades. I feel like I get stuff set up and working and go about my life and then a failure happens at the most inopportune moment. Mostly, the failures are when I have a few hours free and decide to upgrade the OS and everything breaks and all the dependencies fall apart and some feature is no longer supported. That's where I started looking to K8s to just roll back until I have time to manage it.

[–] [email protected] 4 points 1 year ago

While you're probably right overall, there are many good reasons to use k8s. The API provides all sorts of benefits: kubectl, k9s, and other operational UIs; good deployment models and tools like Argo; loads of Helm charts that are (theoretically) ready to use.

No, those things aren't free. There's a lot of overhead to running k8s.

[–] [email protected] 28 points 1 year ago (5 children)

I run k3s and all my stuff runs in it; no need to deal with docker anymore.

[–] [email protected] 4 points 1 year ago (3 children)

I'm not very familiar with kubernetes or k3s but I thought it was a way to manage docker containers. Is that not the case? I'm considering deploying a k3s cluster in my proxmox environment to test it out.

[–] [email protected] 5 points 1 year ago

You can use kubernetes on any OCI container deployment.

So if you don't want/need to install the docker program, you can go with containerd.

[–] [email protected] 4 points 1 year ago

Kubernetes is abbreviated K8s (because there are 8 letters between the "k" and the "s"). K3s is a "lite" version. Generally speaking, Kubernetes manages your containers. You basically tell K8s what the state should be, and it does what it needs to do to get the environment as you've declared: it'll check on and start or restart services, and start containers on a node that can run them (like ensuring enough RAM is available). There's a lot more, but that's the general idea.
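A minimal example of that declare-the-state model (image and names are illustrative):

```yaml
# deployment.yaml — apply with: kubectl apply -f deployment.yaml
# You declare 2 replicas; k8s starts, restarts, and reschedules
# pods until reality matches the declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels: { app: whoami }
  template:
    metadata:
      labels: { app: whoami }
    spec:
      containers:
        - name: whoami
          image: traefik/whoami   # example image
          resources:
            requests:
              memory: 64Mi        # scheduler only places the pod where this much RAM is free
```

Kill a pod or a node and the controller notices the drift and recreates it; that reconciliation loop is the whole trick.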

[–] [email protected] 22 points 1 year ago (1 children)

I manage like 200 servers in Google Cloud k8s, but I don't think I'd do that for home use. The core purpose is to manage multiple servers and assign processes between them: auto-scaling, cluster-internal networking. Running docker containers for single-instance apps for personal use doesn't require this kind of complexity.

My NAS software has a docker thing just built into it. I can upload or specify a package and it just runs it on the local hardware. If you have a Linux shell, I guess all you really have to do is run dockerd to start the daemon, make sure your network config allows connections, and upload your docker containers to it for running

[–] [email protected] 3 points 1 year ago

My thinking is the same. I see lots of k8s mentions on here and from coworkers, but at home all I use is docker and VMs, because I don't want all that complexity I have to deal with at work.

[–] [email protected] 19 points 1 year ago

Kubernetes is great if you run lots of services and/or already use kubernetes at work. I use it all the time and I've learned a lot on my personal cluster that I've taken to work to improve their systems. If you're used to managing infra already then it's not that much more work, and it's great to be able to shutdown a server for maintenance and not have to worry about more than a brief blip on your home services.

[–] [email protected] 15 points 1 year ago* (last edited 1 year ago) (1 children)

I use k8s at work and have built a k8s cluster in my homelab... but I did not like it. I tore it down and am currently using podman, and I don't think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I've settled on podman for myself).

  1. K8s itself is quite resource-consuming, especially on RAM. My homelab is built on old/junk hardware from retired workstations. I don't want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but then it's not quite precisely k8s either. If I'm going to start trimming off the parts of k8s I don't need, I end up going all the way to single-node podman/docker, not the halfway point that is k3s.
  2. If you don't use hostNetwork, the k8s model of routing all traffic within the cluster except for egress is pure overhead. It's totally necessary when you have a thousand engineers slinging services around your cluster, but there's no benefit to this level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
  3. Podman accepts a subset of k8s resource YAML as a docker-compose-like config interface. This lets me use my familiarity with k8s configs in my podman setup.
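That third point in practice: podman can consume a plain k8s Pod manifest directly. A sketch with illustrative names:

```yaml
# pod.yaml — run with: podman kube play pod.yaml
# (older podman versions spell it: podman play kube pod.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080      # published straight to the host, no ingress layer
```

Same YAML shape you'd write for a cluster, but running as ordinary local containers.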

Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraint k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively... but I don't have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s' solutions to them is annoying there.

[–] [email protected] 13 points 1 year ago (3 children)

Seems a bit overkill for a personal use selfhosting set-up.

Personally, I don't need anything that requires multiple replicas and load balancers.

Do people who have homelabs actually need them? Or is it just for learning?

[–] [email protected] 6 points 1 year ago (1 children)

I find mine useful as both a learning process and a thing I need. I don't like using cloud services where possible, so I set things up to replace having to rely on them: Nextcloud for storage, Plex and some *arr servers for media, etc. And I think once you put the hardware and power costs against what I'd pay for all the subscriptions (particularly cloud storage), it comes out cheaper, at least with the hardware I'm using.

[–] [email protected] 3 points 1 year ago (3 children)

Yes, those are all great uses of it. But could all still be achieved with docker containers running on some machines at home, right?

Have you ever had a situation where features provided by kubernetes (like replicas, load balancers, etc) came in handy?

I'm not criticizing, I'm genuinely curious if there's a use-case for kubernetes for personal self-hosting (besides learning).

[–] [email protected] 5 points 1 year ago (1 children)

A lot of people thought this was the case for VMs and docker as well, and now it seems to be the norm.

[–] [email protected] 5 points 1 year ago

A lot of people thought this was the case for VMs and docker as well, and now it seems to be the norm.

Yes, but docker does provide features that are useful at the level of a hobbyist self-hosting a few services for personal use (e.g. reproducibility). I like using docker and ansible to set up my systems, as I can painlessly reproduce everything or migrate to a different VPS in a few minutes.
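A sketch of what that docker+ansible reproducibility can look like. The module names are from Ansible's `community.docker` collection; the host group, package name, and paths are made up for illustration.

```yaml
# playbook.yml — re-running this against a fresh VPS rebuilds everything
- hosts: vps
  become: true
  tasks:
    - name: Install docker
      ansible.builtin.package:
        name: docker.io          # package name varies by distro
        state: present

    - name: Copy compose projects to the server
      ansible.builtin.copy:
        src: ./services/
        dest: /opt/services/

    - name: Bring services up
      community.docker.docker_compose_v2:
        project_src: /opt/services
```

The whole migration story is then `ansible-playbook playbook.yml` pointed at the new box.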

But kubernetes seems overkill. None of my services have enough traffic to justify replicas, I'm the only user.

Besides learning (which is a valid reason), I don't see why one would bother setting it up at home. Unless there's a very specific use-case I'm missing.

[–] [email protected] 4 points 1 year ago

For me, I find that I learn more effectively when I have a goal. Sure, it's great to follow somebody's "Hello World" web site tutorial, but the real learning comes when I start to extend it to include CI/CD for example.

As far as a use case, I'd say that learning IS the use case.

[–] [email protected] 13 points 1 year ago

Kubernetes is awesome for self hosting, but tbh its superpower isn't multi-node/scalability/clustering shenanigans. It's that because every bit of configuration is just an object in the API, you can really easily version control everything: charts and config in git, tools like Helm make applying changes super easy, use Renovate to do automatic updates, use your CI tool of choice to deploy on commit, leverage your hobby into a DevOps role, profit.
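One way that workflow gets wired up is an Argo CD Application pointing at the config repo; Renovate then just opens PRs against that repo and merges become deploys. A sketch: the repo URL, paths, and app name are placeholders, and Argo CD is one tool choice among several (Flux works similarly).

```yaml
# application.yaml — Argo CD watches the git repo and keeps the cluster in sync
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab   # your config repo
    targetRevision: main
    path: apps                                    # charts/manifests live here
  destination:
    server: https://kubernetes.default.svc        # deploy into this same cluster
    namespace: default
  syncPolicy:
    automated: {}                                 # auto-apply on every commit
```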

[–] [email protected] 11 points 1 year ago

Went swarm instead. I don't need a department of k8s consultants.

[–] [email protected] 9 points 1 year ago (1 children)

I like the concept, but hate the configuration schema and the tooling, which is all needlessly obtuse (e.g. Helm).

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
Git Popular version control system, primarily for code
HA Home Assistant automation software
HA High Availability
HTTP Hypertext Transfer Protocol, the Web
LXC Linux Containers
NAS Network-Attached Storage
SSD Solid State Drive mass storage
SSH Secure Shell for remote terminal access
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
k8s Kubernetes container management package
nginx Popular HTTP server

11 acronyms in this thread; the most compressed thread commented on today has 11 acronyms.

[Thread #82 for this sub, first seen 26th Aug 2023, 23:55] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 7 points 1 year ago (1 children)

HA is high availability. Home Assistant is usually shortened to HASS.

[–] [email protected] 6 points 1 year ago (1 children)

It does list that as another possible meaning, if I'm reading the table correctly

[–] [email protected] 8 points 1 year ago

The Lemmy instance I'm speaking from right now is running in my k8s cluster.

[–] [email protected] 8 points 1 year ago (3 children)

Nomad all the way. K8s is so bloated. Docker swarm can only do docker. Nomad can do basically anything.

[–] [email protected] 9 points 1 year ago

It’s a damn shame it’s no longer going to be free open source. I just switched my lab over to Nomad and Consul last year and it has been incredibly smooth sailing.

[–] [email protected] 4 points 1 year ago (1 children)

Nomad is a breath of fresh air after working with k8s professionally.

Don't get me wrong, love k8s, but it's a bit much (until you need it)

[–] [email protected] 8 points 1 year ago

I am insane and use bare-bones LXC.

Stupid ramblings you can probably ignore:


Usually though it's because I run most stuff bare metal anyway so LXC is for temporary or random cases where I need a weird dependency or I want to run a niche service.

I only use docker when I actually want a faster setup, like docker-osx, which does all the VM stuff for running a virtual Mac for you.

I don't really mind docker, but for homelab use I just find myself rewriting the Dockerfile any time I want to change something, which I don't really need to do if I'm not publishing it or even reusing it.

Kubernetes is really more effective for services under actual load, which you never need in a homelab lol. It's great for learning a k8s cluster, but the resources get eaten fast.


[–] [email protected] 7 points 1 year ago

I run a 2 node k3s cluster. There are a few small advantages over docker swarm, built-in network policies to lock down my VPN/Torrent pod being the main one.
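The kind of built-in policy being referred to looks roughly like this: default-deny egress for the torrent pod, with only the VPN endpoint (and DNS) allowed out. A sketch; the labels, VPN address, and WireGuard port are hypothetical.

```yaml
# networkpolicy.yaml — torrent pod may only reach the VPN endpoint and DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: torrent-vpn-only
spec:
  podSelector:
    matchLabels:
      app: torrent                 # hypothetical pod label
  policyTypes: ["Egress"]          # everything not listed below is dropped
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.7/32   # example VPN server address
      ports:
        - protocol: UDP
          port: 51820              # e.g. WireGuard
    - ports:                       # allow DNS lookups
        - protocol: UDP
          port: 53
```

If the VPN tunnel drops, traffic simply has nowhere else to go.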

Other than that, writing Kubernetes YAML is a lot more verbose than docker-compose. Helm does make it bearable, though.

Due to real-life my migration to the cluster is real slow, but the goal is to move all my services over.

It's not "better" than compose but I like it and it's nice to have worked with it.

[–] [email protected] 7 points 1 year ago (1 children)

I love kubernetes. At the start of the year I installed k3s on my VPS and moved over all my services. It was a great learning opportunity that also helped immensely for my job.

It works just as well as my old docker compose setup, and I love how everything is contained in one place in the manifests. I don't need to log in to the server and issue docker commands anymore (or write scripts / CI stages that do so for me).

[–] [email protected] 6 points 1 year ago (6 children)

Love is a strong word, but kubernetes is definitely interesting. I'm finishing up a migration of my homelab from a docker host running in a VM managed with Portainer, to one smaller VM and three refurbished Lenovo mini PCs running Rancher. It hasn't been an easy road, but I chose to go with Rancher and k3s since it seemed to handle my use case better than Portainer and Docker Swarm could. I can't pass up those cheap mini PCs.

[–] [email protected] 6 points 1 year ago

Docker with or without Compose and systemd is good enough for most of my use cases. SaltStack is good enough for config-as-code.

[–] [email protected] 6 points 1 year ago (1 children)

I have a K3OS cluster built out of a bunch of raspberry pis, it works well.

The big reason I like kubernetes is that once it is up and running with git ops style management, adding another service becomes trivial.

I just copy-paste one of my existing services, tweak the names/namespaces, and then change the specifics for the pods to match what their docker configuration needs, i.e. what folders need mounting and any other secrets or configs.

I then just commit the changes to github and apply them to the cluster.

The process of being able to roll back changes via git is awesome
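That "copy a service, commit, apply" flow can be as small as a kustomization file listing your apps. A sketch with made-up app names:

```yaml
# kustomization.yaml — apply with: kubectl apply -k .
# Adding a service is one copied manifest folder plus one line here.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - apps/jellyfin
  - apps/nextcloud
  - apps/new-service   # copy an existing folder, tweak names, commit
```

And the rollback is just `git revert` of the offending commit followed by the same apply.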

[–] [email protected] 5 points 1 year ago

I've spent the last two weeks getting a k3s cluster working and I've had nothing but problems, but it has been a great catalyst for learning new tools like Ansible and load balancers. I finally got the cluster working last night. If anyone else is having weird issues with the cluster timing out: etcd needs fast storage. Moving my VMs from spinning rust to a cheap SSD fixed all my problems.

[–] [email protected] 5 points 1 year ago (1 children)

I do AKS. I can't say love is the right word for it. Lol

[–] [email protected] 3 points 1 year ago

AKS is a shame. Most of azure, actually. I do my best to find ways around the insanity but it always seems to leak back in with something insane they chose to do for whatever Microsoft reason they have.

[–] [email protected] 5 points 1 year ago

Used k3s to manage my single instance. Lots of gotcha moments to learn from! Will add Flux for CD after I decide how to self-host the Git server.

[–] [email protected] 4 points 1 year ago

Here's a slightly different story: I run OpenBSD on 2 bare-metal machines in 2 different physical locations. I used k8s at work for a bit until I steered my career more towards programming. Having k8s knowledge handy doesn't really help me so much now.

On OpenBSD there is no Kubernetes. Because I've got just two hosts, I've managed them with plain SSH and the default init system for 5+ years without any problems.

[–] [email protected] 3 points 1 year ago

I feel like it took me quite a while to get the hang of Docker, and Kubernetes on a general look seems all that much more daunting! Hopefully one day I can break it down into smaller pieces so I can get started with it!

[–] [email protected] 3 points 1 year ago

Running an RKE cluster as VMs on my ceph+proxmox cluster. Using Rook and external ceph as my storage backend and loving it. I haven't fully migrated all of my services, but thus far it's working well enough for me!

[–] [email protected] 2 points 1 year ago

I like the Kubes

[–] [email protected] 2 points 1 year ago

My homelab is a 2 node Kubernetes cluster (k3s, raspberry pis), going to scale it up to 4 nodes some day when I want a weekend project.

Built it to learn Kubernetes while studying for CKA/CKAD certification for work, where I design, implement and maintain service architectures running in Kubernetes/OpenShift environments every day. It's relatively easy for me to manage Kubernetes for my home lab, but it's a bit heavy and has a steep learning curve if you're new to it, which (understandably) puts people off it, I think. Especially for homelab/selfhosting use cases. It's a very valuable (literally $$$) skill if you're in that enterprise space though.
