Easily doable in docker using the network_mode: "service:VPN_CONTAINER"
configuration (assuming your VPN is running as a container)
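For reference, a minimal compose sketch of that setup (service and image names here are placeholders, not from the post):

```yaml
services:
  vpn:
    image: your-vpn-image          # placeholder: whatever VPN client you run
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun

  torrent:
    image: your-torrent-image      # placeholder
    # All of this container's traffic goes through the vpn container's stack:
    network_mode: "service:vpn"
    depends_on:
      - vpn
```

One gotcha with this layout: since `torrent` shares `vpn`'s network stack, any published ports have to be declared on the `vpn` service, not on `torrent`.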
That works, but it breaks when you redeploy the VPN container IIRC: the attached containers lose networking until they're recreated. I don't do this anymore.
(Now I just use lxc containers with docker inside, and I'll set the default gateway of the lxc to another lxc that is a gateway for a VPN network)
It is very doable.
Take a look at https://github.com/qdm12/gluetun - it’s what I use for this.
Seconding gluetun, easy to use and configure.
Gluetun is overkill if you already have a working setup. Your system can handle this in a much simpler way with built-in tools.
You can use systemd to restrict a daemon to your VPN IP. For instance, here's how to do that with transmission: override the default unit with the following command:
systemctl edit transmission-daemon.service
Then type what you need to override:
[Service]
IPAddressDeny=any
IPAddressAllow=10.0.0.1 # --> your VPN IP here
Another option might be to restrict it to a single network interface:
[Service]
RestrictNetworkInterfaces=wg0 # --> your VPN interface
Save the file and run systemctl daemon-reload
followed by systemctl restart transmission-daemon.service
and it should be applied.
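If you want to confirm the override actually took effect, you can ask systemd to print the unit's effective properties (assuming a systemd recent enough to support these directives: IPAddressAllow/Deny since v235, RestrictNetworkInterfaces since v249):

```shell
# Show the IP filtering now applied to the unit:
systemctl show transmission-daemon.service -p IPAddressDeny -p IPAddressAllow

# Or, for the interface-restriction variant:
systemctl show transmission-daemon.service -p RestrictNetworkInterfaces
```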
This is a simple and effective solution that doesn't require more stuff.
You don't even need full-fledged containers for that btw.
Learn how to script with ip netns and veth.
Do you have a link at hand on how to start a process within a specific veth, by chance? Namespaces themselves are easy enough and there are a lot of tutorials, but I don't want my programs to ever be outside the VPN space: not at startup, not as a failover, etc.
That's the reason I stuck with the container setup, just gluetun plus the VPN'd services.
start a process within a specific veth
That sentence doesn't make any sense.
Processes run in network namespaces (netns), and that's exactly what ip netns exec does.
A newly created netns via ip netns add has no network connectivity at all. Even (private) localhost is down and you have to run ip link set lo up to bring it up.
You use veth pairs to connect a virtual device in a network namespace with a virtual device in the default namespace (or another namespace with internet connectivity).
You route the VPN server address via the netns veth device and nothing else. Then you run wireguard/OpenVPN inside netns.
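The steps above can be sketched as a script. This must run as root, and the namespace name, addresses, uplink interface, and VPN endpoint are all placeholders, not from the post:

```shell
#!/bin/sh
# Sketch: a netns "AA" whose only route out is via the VPN server.
set -eu

VPN_SERVER=203.0.113.10        # placeholder: your VPN server's public IP
UPLINK=eth0                    # placeholder: your real uplink interface

ip netns add AA
ip netns exec AA ip link set lo up            # bring up loopback inside AA

# veth pair: veth-AA stays in the default ns, veth-AA-in moves into AA.
ip link add veth-AA type veth peer name veth-AA-in
ip link set veth-AA-in netns AA
ip addr add 10.200.1.1/24 dev veth-AA
ip link set veth-AA up
ip netns exec AA ip addr add 10.200.1.2/24 dev veth-AA-in
ip netns exec AA ip link set veth-AA-in up

# Inside AA, route ONLY the VPN server via the veth; no general default route.
ip netns exec AA ip route add "$VPN_SERVER" via 10.200.1.1

# NAT the veth subnet out of the default namespace:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.200.1.0/24 -o "$UPLINK" -j MASQUERADE

# Now start wireguard/OpenVPN inside AA; the tunnel becomes AA's default route:
# ip netns exec AA wg-quick up /etc/wireguard/aa.conf
```

If the VPN link ever drops, processes inside AA simply lose connectivity instead of leaking out the uplink, which is the fail-closed behavior the question was after.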
Avoid using systemd since it runs in the default netns by default, even if called from a process running in another netns.
The way I do it is:
- A script for all the network setup: ns_con AA
- A script to run a process in a netns (basically a wrapper around ip netns exec): ns_run AA <cmd>
- Run a terminal app using the second script.
- Run a tmux session on a separate socket inside terminal app. e.g.
export DISPLAY=:0 # for X11
export XDG_RUNTIME_DIR=/run/user/1000 # to connect to already running pipewire...
# double check this is running in AA ns
tmux -f <alternative_config_file_if_needed> -L NS_AA
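For the "double check" step, one unprivileged way (assuming the namespace was created with ip netns add AA, so it has a handle under /run/netns) is to compare namespace inodes:

```shell
# The current process's network namespace, as an inode reference:
readlink /proc/self/ns/net            # prints something like net:[4026531840]

# The inode of the named namespace handle created by `ip netns add AA`;
# if the two numbers match, this shell is inside AA:
stat -Lc %i /run/netns/AA 2>/dev/null || echo "netns AA not present on this host"
```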
I have this in my tmux config:
set-option -g status-left "[#{b:socket_path}:#I] "
So I always know which socket a tmux session is running on. You can include network info there if you're still not confident in your setup.
Now, I can detach that tmux session. Reattaching with tmux -L NS_AA attach from anywhere will give me the session still running in AA.
Yeah, I had a brainfart, I meant namespace...
And thanks a lot for this writeup. I think with your help I figured out where I went wrong in my train of thought, and I'll give it another try next week when I have a bit of downtime.
The time you took to write this is highly appreciated! ♥