this post was submitted on 13 Dec 2023
Selfhosted


I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home, even though I have a decent understanding of how it works - after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

[–] [email protected] 2 points 10 months ago (1 children)

Saves time, minimal compatibility issues, portability, and you can update with two commands. There's really no reason not to use Docker.
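For what it's worth, the two-command update usually looks like this (a sketch assuming your services are defined in a docker-compose.yml in the current directory):

```shell
# Pull any newer images referenced by docker-compose.yml
docker compose pull
# Recreate only the containers whose images changed
docker compose up -d
```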

[–] [email protected] 1 points 10 months ago (1 children)

But I can't really tinker IN the Docker image, right? It's maintained elsewhere and I just get what I'm given - but with way less tinkering? Do I have control over the amount/percentage of resources a container uses? And could I just freeze a container, move it to another physical server, and continue it there? Would it be worth the time to learn everything about Docker just to replace my 10 VMs in the long run?

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

You can tinker in the image in a variety of ways, but make sure to preserve your state outside the container in some way:

  1. Extend the image you want to use with a custom Dockerfile
  2. Execute an interactive shell session, for example docker exec -it containerName /bin/bash
  3. Replace or expose filesystem resources using host or volume mounts.
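For option 1, the custom Dockerfile can be tiny (the base image and file name here are just examples):

```dockerfile
# Extend an upstream image with your own tweaks
FROM nginx:1.25
# Bake in a custom config instead of mounting it at runtime
COPY custom.conf /etc/nginx/conf.d/custom.conf
```

Build it with docker build and run that instead of the upstream image.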

Yes, you can set a variety of resources constraints, including but not limited to processor and memory utilization.
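For instance, CPU and memory caps can be set at run time (the values and image here are illustrative):

```shell
# Cap the container at 1.5 CPUs and 512 MiB of RAM
docker run -d --name app --cpus="1.5" --memory="512m" nginx:1.25
```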

There's no reason to "freeze" a container: if your state is in a host or volume mount, you can destroy the container, migrate your data, and bring it back up on another machine with the same run command or docker-compose file. Different terminology and concept, but same result.
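A sketch of that migration flow, assuming the service keeps its state in a named volume called appdata (container and image names are hypothetical):

```shell
# On the old host: stop the container and archive the volume
docker stop app
docker run --rm -v appdata:/data -v "$PWD":/backup alpine \
    tar czf /backup/appdata.tar.gz -C /data .

# Copy appdata.tar.gz to the new host, then restore it there
docker volume create appdata
docker run --rm -v appdata:/data -v "$PWD":/backup alpine \
    tar xzf /backup/appdata.tar.gz -C /data

# Finally, start the same image against the restored volume
docker run -d --name app -v appdata:/data someimage:latest
```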

It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.
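A minimal docker-compose.yml along those lines might look like this (the service name and image are just examples):

```yaml
services:
  nextcloud:
    image: nextcloud:28
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped

volumes:
  nextcloud_data:
```

The whole service - image, ports, storage, restart behavior - lives in one version-controllable file.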

[–] [email protected] 2 points 10 months ago (1 children)

Hm. That doesn't really sound bad. Thanks man, I guess I'll take some time to read into it. Currently on Proxmox, but AFAIK it does containers too.

[–] [email protected] 2 points 10 months ago (1 children)

It's really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.

Granted, it's a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.

I also further orchestrate my containers using Ansible, but that's not entirely necessary for everyone.
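As a rough idea of what that looks like, here's a hypothetical task using the community.docker collection (name, image, and port are illustrative):

```yaml
- name: Ensure the web container is running
  community.docker.docker_container:
    name: web
    image: nginx:1.25
    ports:
      - "8080:80"
    restart_policy: unless-stopped
```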

[–] [email protected] 1 points 10 months ago (1 children)

I only run about 10 VMs, so there's probably no need for overkill with additional stuff. Though I'd like a GUI - is there one for Docker? I once tested a complete OS built around Docker (forgot the name), but it seemed very unfriendly and overly convoluted.

[–] [email protected] 2 points 10 months ago (1 children)

There's a container web UI called Portainer, but I've never used it. It may be what you're looking for.

I also use a container called Watchtower to automatically update my services. Granted there's some risk there, but I wrote a script for backup snapshots in case I need to revert, and Docker makes that easy with image tags.

There's another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built in healthcheck, but they're easy to add with a custom Dockerfile or a docker-compose.)
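Adding a healthcheck in a compose file might look like this (the image, endpoint, and intervals are illustrative):

```yaml
services:
  app:
    image: myapp:latest   # hypothetical image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```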

[–] [email protected] 2 points 10 months ago (1 children)

Thanks for the tips! But did I get that right - a container can have access to other containers?

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

The Docker client communicates over a UNIX socket. If you mount that socket in a container with a Docker client, it can communicate with the host's Docker instance.
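Concretely, that mount looks like this on the command line (the image name is hypothetical; note that this effectively grants root-equivalent access to the host, so only do it for tools you trust, like Watchtower):

```shell
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  example/tool-with-docker-client
```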

It's entirely optional.

[–] [email protected] 2 points 10 months ago (1 children)

Ah okay. Sounds safe enough. Thanks again :-)
