submitted 1 day ago* (last edited 1 day ago) by emotional_soup_88@programming.dev to c/linux@lemmy.ml

I have three Ethernet interfaces, namely eth[0...2]. eth0 is connected to my VPN router, and eth1 and eth2 are connected to my public-facing router. eth0 is the standard interface that I normally let my Linux instance use. I now want to set up a container that hijacks (makes unavailable to the host) eth1 or eth2 in order to run various services that need to be reachable from the WAN through a WireGuard tunnel.

I am aware that the man pages for systemd-nspawn say that it is primarily meant to be a test environment and not a secure container. Does anybody have experience with and/or opinions on this? Should I just learn how to use Docker?

For now, I am only asking about any potential security implications, since I don't understand how container security works "under the hood". The network portion of the setup would be something like:

Enabling forwarding kernel parameters on the host

Booting the container with systemd-nspawn -b -D [wherever/I/put/the/container] --network-interface=[eth1 or 2]

Then, managing the container's network with networkd config files, including enabling IPForward and IPMasquerade

Then, configuring WireGuard according to its official guides or, for instance, the Arch wiki.
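For concreteness, here is roughly what I have in mind for the first three steps (the container path and interface are just examples, and I'm going off the networkd docs for the IPForward=/IPMasquerade= options, so treat this as a sketch, not a tested recipe):

    # on the host: enable packet forwarding
    sysctl -w net.ipv4.ip_forward=1

    # boot the container with exclusive use of eth1
    systemd-nspawn -b -D /var/lib/machines/wgbox --network-interface=eth1

    # inside the container: /etc/systemd/network/50-eth1.network
    [Match]
    Name=eth1

    [Network]
    DHCP=yes
    IPForward=yes
    IPMasquerade=ipv4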

Any and all input would be appreciated! 😊

[-] doodoo_wizard@lemmy.ml 5 points 17 hours ago

Cons:

It’s not gonna work

It’s not well documented

No one else does it so it’s hard to ask for help

You don’t even need a container for this, just use the routing table

Pros:

New project

No chance to be led astray by stackoverflow or reddit

Contributing to systemd development by testing new features

[-] emotional_soup_88@programming.dev 1 points 16 hours ago

Well, now I just have to try it!

I have no idea how to tell specific processes or shells to use a specific interface while also forbidding others from using the same interface... Which is why I thought, "but I can force a container to use a specific interface! Gotcha!"

I'm almost there, I think. I managed to get my phone and my nspawn-ed WireGuard interface to shake hands. I just need to tweak the forwarding and NAT rules in my firewall. After I touch grass. Oh, my back...
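In case it helps anyone searching later, the NAT bit I still have to nail down should boil down to something like this inside the container (the 10.0.0.0/24 subnet is just an example of what the wg0 peers might use):

    # masquerade traffic arriving from the wireguard peers out of eth1
    nft add table ip nat
    nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
    nft add rule ip nat postrouting ip saddr 10.0.0.0/24 oifname "eth1" masquerade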

[-] doodoo_wizard@lemmy.ml 2 points 15 hours ago

The usual way to force a program or process to use a specific interface is called binding. It used to be something you really had to know your stuff to use correctly, but nowadays there are a million tutorials out there.

With systemd you can lean on a pretty well tested and reliable part of its namespace implementation: establish a network namespace and bind both the target interface and the program to it. Alternatively, you can do it with iptables, matching packets by their owning user and marking them in the mangle table for policy routing.

Nowadays you'd reach for nftables instead, but it does the same job.
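If you want to see the moving parts without systemd, the bare iproute2 version looks about like this (names, addresses, and the daemon are made up):

    # create a namespace and move the target interface into it (as root)
    ip netns add isolated
    ip link set eth1 netns isolated

    # configure the interface from inside the namespace
    ip netns exec isolated ip addr add 192.168.2.10/24 dev eth1
    ip netns exec isolated ip link set lo up
    ip netns exec isolated ip link set eth1 up
    ip netns exec isolated ip route add default via 192.168.2.1

    # anything started this way can only use eth1, and the host no longer sees the interface
    ip netns exec isolated some-daemon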

[-] talkingpumpkin@lemmy.world 9 points 23 hours ago* (last edited 23 hours ago)

Should I just learn how to use Docker?

Since you are not tied to docker yet, I'd recommend going with podman instead.

They are practically the same, and most (all?) docker commands work on podman too, but podman is more modern (second-mover advantage) and has a better reputation.

As for passing a network interface to a container, it's doable and IIRC it boils down to changing the namespace on the interface.

Unless you have specific reasons to do that, I'd say it's much easier to just forward ports from the host to containers the "normal" way.

There's no limit to how many different IPs you can assign to a host (you don't need a separate interface for each one), and you can use a given port on different IPs for different things.

For example, I run soft-serve (a git server) as a container. The host has one "management" IP (192.168.10.243) where openssh listens on port 22, and another IP (192.168.10.98) whose port 22 is forwarded to the soft-serve container (via podman run [...] -p 192.168.10.98:22:22).
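Spelled out, the pattern is roughly this (how the second IP gets onto the NIC, and the image name, will depend on your setup; the image path here is a guess at the upstream one):

    # give the host a second address on the same NIC
    ip addr add 192.168.10.98/24 dev eth0

    # publish the container's port 22 on that address only
    podman run -d --name soft-serve \
        -p 192.168.10.98:22:22 \
        docker.io/charmbracelet/soft-serve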

[-] emotional_soup_88@programming.dev 1 points 23 hours ago

Thank you for the suggestion on Podman! The thing is, since the VPN is running on one of my routers (connected to eth0), and since I want the public-facing interfaces (1 and 2) not to use that router, I'm going to make use of one of those two extra interfaces anyway. Either way, good advice on adding multiple addresses to the same interface!

[-] truthfultemporarily@feddit.org 6 points 23 hours ago

This feels like a hacky solution.

Why not use VLANs? You can have just one physical interface and then have VLAN interfaces. You can then use a bridge to have every container have their own interface and IP that is attached to a specific VLAN.
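For example, with plain iproute2 (the VLAN id and names are illustrative, and the switch/router side has to tag the VLAN):

    # a VLAN sub-interface on the single physical NIC
    ip link add link eth0 name eth0.30 type vlan id 30

    # a bridge the containers' veth ends can attach to
    ip link add br-vlan30 type bridge
    ip link set eth0.30 master br-vlan30
    ip link set eth0.30 up
    ip link set br-vlan30 up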

[-] emotional_soup_88@programming.dev 2 points 23 hours ago

I'd absolutely do that if I didn't already have two extra physical interfaces. :)

[-] a_fancy_kiwi@lemmy.world 7 points 1 day ago* (last edited 1 day ago)

Should I just learn how to use Docker?

Yes. I put off learning it for so long and now can’t imagine self-hosting anything without it. I think all you have to do is assign a static IP to the NIC from your router and then specify that IP and port in a docker-compose.yml file:

Ex: IP-address:external-port:container-port

services:
    app-name:
        image: some-image    # placeholder; use your app's actual image
        ports:
            - 192.168.1.42:3000:3000
[-] MonkderVierte@lemmy.zip 2 points 1 day ago* (last edited 23 hours ago)

Would ~~N~~LXC be more inconvenient? I don't trust Docker's pseudo-containerization.

[-] TVA@thebrainbin.org 2 points 19 hours ago

Unless you're downloading a prebuilt LXC, you'd still have to do the whole install manually yourself.

If you do download a prebuilt one, then you'll need to do the updating yourself, like you would a normal application, including ensuring you keep dependencies up to date and all that.

Both have their pros and cons and I use each depending on what I'm doing (and basically all of my dockers are running in their own LXC containers, which I find to be the best of both worlds).

FWIW, I don't download any prebuilt LXC anymore other than the base 'Ubuntu' or 'Debian' ones ... the ones in Proxmox with prebuilt apps were a pain to update for me, especially since I had no idea how the apps were actually installed. Most of the time they weren't set up through the package manager, didn't even have curl installed, and it was just way more trouble than it was worth.

Proxmox now has a built-in containerized Docker implementation that uses an LXC; you can just provide it the Docker package details. It's still in beta, though, and I don't know that it's ready to be depended on yet.

[-] MonkderVierte@lemmy.zip 1 points 19 hours ago* (last edited 19 hours ago)

Thanks. How about taking a Docker container and converting its spec?

[-] TVA@thebrainbin.org 2 points 19 hours ago

Sorry, not 100% sure what you mean by "converting its spec".

If you mean taking an existing docker and moving it to a standard installation, that would depend on what all is needed. Some installations include a ton of other dockers with databases and such, and you'd basically need to move them all independently and ensure the programs talk to each other properly.

For others, it'd be as simple as making sure the contents of your original docker data folder are in the right place when you launch the app, and you're done.

[-] MonkderVierte@lemmy.zip 1 points 19 hours ago* (last edited 17 hours ago)

Oof, okay. Although you could probably just merge the dependencies into your LXC container? That's how it works when creating AppImages.

About "converting its spec": i assumed the main friction point would be the LXC tooling not knowing Dockerfiles. Forgot the name of the containers specification file (Dockerfile), since it was a while ago since i last looked into containering.

Huh, there's also "Apptainer" now? Portable and reproducible, seems interesting.

[-] a_fancy_kiwi@lemmy.world 4 points 23 hours ago

I’m assuming you mean LXC? It’s doable, but without some sort of orchestration tool like Nix or Ansible, I imagine ongoing maintenance or migrations would be kind of a headache.

[-] emotional_soup_88@programming.dev

Sweet! I'll start reading up on Docker, especially since it sounds like it has become an integral part of your self-hosting. :)

[-] a_fancy_kiwi@lemmy.world 6 points 23 hours ago

You might come across docker run commands in tutorials. Ignore those. Just focus on learning docker compose. With docker compose, the run command's options just go into a YAML file, so it's easier to read and understand what's going on. Don't forget to add your user to the docker group so you don't have to type sudo for every command (the exact one-liner is after the list below).

Commands you’ll use often:

docker compose up - creates and starts the container in the foreground

docker compose up -d - same, but detached (runs in the background)

docker compose down - stops and removes the container

docker compose pull - pulls new images

docker image list - lists all images

docker ps - lists running containers

docker image prune -a - deletes images not being used by containers to free up space
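And the group change mentioned above is a one-liner (log out and back in for it to take effect):

    # lets your user run docker commands without sudo
    sudo usermod -aG docker "$USER"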

[-] emotional_soup_88@programming.dev 2 points 23 hours ago

Thanks! What a sweet little handbook for getting started! :D
