You want your Docker containers’ persistent data bind-mounted to real locations on the host. I only use named volumes for non-persistent stuff.
You want your real locations to have a file system that can snapshot (ZFS, BTRFS).
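As a tiny compose sketch of that split (the service name and paths are made up; /tank/appdata would sit on the ZFS/BTRFS pool):

```yaml
services:
  app:
    image: someapp:latest              # hypothetical image
    volumes:
      - /tank/appdata/app:/data        # persistent data -> real path on the snapshottable filesystem
      - app-cache:/var/cache/app       # non-persistent scratch -> named volume

volumes:
  app-cache: {}
```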
Then you can dump the superior Postgres databases while they’re running, and for all other databases and data you stop the containers, snapshot, start the containers (limits downtime!), and then back up that snapshot. Thanks to the snapshot you don’t need to wait for the backup to finish before bringing the containers back up to keep the data consistent. For the backup itself I use restic; it seems to work well and has self-check functions, which is nice. I chose restic over just sending snapshots because of its coupled encryption and integrity checks, which give you reliable data integrity on unreliable mediums (anyone, even giant providers, could blackhole bits of your backup!). I also copy over the exact restic binary that made the backup using encrypted rclone; the encryption there keeps anyone (the baddies? Idk who’d target me, but it doesn’t matter now!) from mucking with that binary in case you ever need that exact version to restore from for some reason.
Note I do not dump the non-Postgres SQL databases; they’re offline and get snapshotted in a stable state. Their dump tooling looked nasty, esp compared to Postgres’ amazingly straightforward way of dumping while running. I didn’t bother figuring out their dump or restore.
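Roughly what that nightly flow looks like as a script. This is only a sketch, assuming ZFS with the data on a tank/appdata dataset, a restic repo over rclone, and made-up container, path, and remote names:

```bash
#!/usr/bin/env bash
set -euo pipefail

snap="backup-$(date +%F)"

# 1. Dump Postgres while it's still running (its dump story is painless).
docker exec postgres pg_dumpall -U postgres > /tank/appdata/dumps/all.sql

# 2. Stop the containers, snapshot, start them again -- downtime is just the snapshot.
docker compose -f /opt/stack/docker-compose.yml stop
zfs snapshot tank/appdata@"$snap"
docker compose -f /opt/stack/docker-compose.yml start

# 3. Back up the consistent, read-only snapshot with restic.
restic -r rclone:remote:backups --password-file /root/.restic-pass \
  backup "/tank/appdata/.zfs/snapshot/$snap"

# 4. Stash the exact restic binary that made the backup, via encrypted rclone.
rclone copy /usr/local/bin/restic cryptremote:backups/bin/
```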
All of your containers should have dedicated users; specify the UID/GID explicitly so they’re easily recreatable in a restore scenario. (The database containers get their own users too.)
Addendum for the dedicated users: if a container is coded by F-tier security peeps and hard-requires root, make an LXC container run by a specific user and put the Docker container inside it. Or use rootless podman; it is competent and can successfully lie to those containers.
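If you’re wondering what that lying looks like in practice, rootless podman uses user namespaces so the container sees root while the host only sees your unprivileged user, e.g.:

```bash
# Run as a normal user, no sudo needed.
podman run --rm docker.io/library/alpine id
# -> uid=0(root) gid=0(root): the container is convinced it's root.

# The UID mapping doing the lying (container UID 0 backed by your host UID):
podman unshare cat /proc/self/uid_map
```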
I don’t do full restore tests of my backups because the time to do so is stupid high thanks to my super low internet speeds. I tested restoring specific files with restic when setting it up, and now I rely on the integrity checks (a 2 GB check a day) to spot-check that everything is reliable. I have a local backup as well as a remote one; the local copy is that same snapshot the restic remote backup is made from, and since the snapshot is directly traversable I don’t need to scrutinize it hard. If I had faster internet I’d probably test a full restore from the remote restic repo once a year. For now I try to restore a random file or small directory once a year.
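For reference, those spot checks are just restic’s built-ins; something like this, with the repo and password-file paths as placeholders:

```bash
# Daily: re-read and verify a 2 GB subset of the repo's data.
restic -r rclone:remote:backups --password-file /root/.restic-pass \
  check --read-data-subset=2G

# Yearly-ish: restore one small path into /tmp and eyeball it.
restic -r rclone:remote:backups --password-file /root/.restic-pass \
  restore latest --target /tmp/restore-test --include /tank/appdata/some-small-dir
```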
Hope the rant helps
I do not know of Internet Comment Etiquette, sorry to disappoint! It’s a username that’s humorous to me and fits a core tenet of mine
Do remember the user/pass for your DBs (or put them in the .env); the actual values don’t matter much as long as you still have them when you restore.
I’m talking about the process user, the ‘user: 6969:6969’ in the docker-compose file. If it’s not there the container runs as whatever user the image was built to use, which for most images is root (and unless you’ve got rootless Docker going, that’s real root as far as the host is concerned). Which could be bad, so head that off if you can! Overall I’d say it’s a low priority, but a real one. A naughty container could do bad things with root privilege plus some Docker vulnerability. I’ve never heard of that kind of attack in the self-hosted community, but as self-hosting gains traction I worry a legit container will get an attack slipped in somehow and wreck (prob ransomware) root Docker installations.
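Concretely, that looks like this in the compose file (the UIDs, image names, and .env wiring are just examples):

```yaml
services:
  app:
    image: someapp:latest      # hypothetical app image
    user: "6969:6969"          # dedicated non-root UID:GID created just for this container
    env_file: .env             # DB user/pass live here, per the note above
  db:
    image: postgres:16
    user: "6970:6970"          # the database container gets its own user too
    env_file: .env
```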
First priority is backup - then you can worry about removing root containers (if you haven’t already done so!).