
I need outbound traffic from a specific Docker container to be tunneled through my WireGuard VPN (terminating on my VPS) so that it exits to the Internet with my VPS's external IP, but I have not been successful so far.

This is my environment:

  • VPS:
    • WireGuard gateway interface on wg0 10.0.80.1/24
    • Internet-facing interface eth0, masquerading enabled
    • IP forwarding enabled
  • home server:
    • WireGuard tunnel to VPS on wg0 10.0.80.200/24
    • LAN-facing interface enp45s0 10.0.1.200, with Internet access through 10.0.1.1
    • docker82 10.0.82.1/24: bridge for Docker containers
      • container 10.0.82.14 that needs to be tunneled
      • Docker's iptables handling is disabled; masquerading and forwarding are set up manually with nftables, confirmed working before the changes described below (a sketch of the equivalent rules follows this list)
    • IP forwarding enabled
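
The manual docker82 NAT mentioned above is not reproduced here; a minimal sketch of the baseline it describes (containers NATed out through the LAN), expressed as nft commands, with the subnet and interface names taken from this question — the real ruleset on the host may differ:

# Baseline NAT for the docker82 bridge out of the LAN interface (illustrative only)
nft add table inet docker82
nft add chain inet docker82 postrouting '{ type nat hook postrouting priority 100 ; policy accept ; }'
nft add rule inet docker82 postrouting ip saddr 10.0.82.0/24 oifname "enp45s0" masquerade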

I have already tested using the VPS as a default gateway on other devices, with WireGuard's AllowedIPs = 0.0.0.0/0, ::/0, so I know masquerading and forwarding on the VPS do work.
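
For completeness, the VPS side amounts to IP forwarding plus masquerading of the tunnel subnet out of eth0; roughly the following, as a sketch only — the actual VPS rules are not shown in this question:

# On the VPS (illustrative): forward and source-NAT traffic arriving over wg0
sysctl -w net.ipv4.ip_forward=1
nft add table inet wg_nat
nft add chain inet wg_nat postrouting '{ type nat hook postrouting priority 100 ; policy accept ; }'
nft add rule inet wg_nat postrouting ip saddr 10.0.80.0/24 oifname "eth0" masquerade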

I tried setting up source-based routing on my home server like this:

echo "100 wireguard" >>/etc/iproute2/rt_tables
ip rule add from 10.0.82.14 table wireguard
ip route add default via 10.0.80.1 dev wg0 table wireguard
ip route add 10.0.1.1 dev enp45s0 table wireguard # needed for Docker DNS resolver
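
The policy rule and the contents of the new table can be checked with:

# Verify the rule and the routes in table "wireguard"
ip rule show
ip route show table wireguard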

Inside the container (10.0.82.14), I do reach 10.0.80.1 but pinging a public IP results in a Destination Host Unreachable response:

a468bb1b5494:~# ip route
default via 10.0.82.1 dev eth0
10.0.82.0/24 dev eth0 proto kernel scope link src 10.0.82.14

a468bb1b5494:~# ping 10.0.80.1
PING 10.0.80.1 (10.0.80.1) 56(84) bytes of data.
64 bytes from 10.0.80.1: icmp_seq=1 ttl=63 time=9.06 ms
64 bytes from 10.0.80.1: icmp_seq=2 ttl=63 time=9.28 ms
^C
--- 10.0.80.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 9.057/9.170/9.284/0.113 ms

a468bb1b5494:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.0.82.1 icmp_seq=1 Destination Host Unreachable
From 10.0.82.1 icmp_seq=2 Destination Host Unreachable
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1016ms

tcpdumping on docker82 from the container host (home server):

laxis@nuc:~$ sudo tcpdump -n -i docker82 icmp
listening on docker82, link-type EN10MB (Ethernet), snapshot length 262144 bytes
14:21:41.229730 IP 10.0.82.14 > 10.0.80.1: ICMP echo request, id 28, seq 1, length 64
14:21:41.351761 IP 10.0.80.1 > 10.0.82.14: ICMP echo reply, id 28, seq 1, length 64
14:21:42.231232 IP 10.0.82.14 > 10.0.80.1: ICMP echo request, id 28, seq 2, length 64
14:21:42.241748 IP 10.0.80.1 > 10.0.82.14: ICMP echo reply, id 28, seq 2, length 64
14:21:43.488539 IP 10.0.82.14 > 8.8.8.8: ICMP echo request, id 29, seq 1, length 64
14:21:43.488686 IP 10.0.82.1 > 10.0.82.14: ICMP host 8.8.8.8 unreachable, length 92
14:21:44.515474 IP 10.0.82.14 > 8.8.8.8: ICMP echo request, id 29, seq 2, length 64
14:21:44.515576 IP 10.0.82.1 > 10.0.82.14: ICMP host 8.8.8.8 unreachable, length 92

tcpdumping on wg0 from the home server:

laxis@nuc:~$ sudo tcpdump -n -i wg0 icmp
listening on wg0, link-type RAW (Raw IP), snapshot length 262144 bytes
14:23:40.848287 IP 10.0.82.14 > 10.0.80.1: ICMP echo request, id 30, seq 1, length 64
14:23:40.857968 IP 10.0.80.1 > 10.0.82.14: ICMP echo reply, id 30, seq 1, length 64
14:23:41.849639 IP 10.0.82.14 > 10.0.80.1: ICMP echo request, id 30, seq 2, length 64
14:23:41.859707 IP 10.0.80.1 > 10.0.82.14: ICMP echo reply, id 30, seq 2, length 64
14:23:43.230307 IP 10.0.82.14 > 8.8.8.8: ICMP echo request, id 31, seq 1, length 64
14:23:44.259534 IP 10.0.82.14 > 8.8.8.8: ICMP echo request, id 31, seq 2, length 64

The wg0 interface on the VPS side sees none of the echo request packets for 8.8.8.8.
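
For reference, the check on the VPS side is the same kind of capture run there:

# On the VPS: no echo requests for 8.8.8.8 show up here
sudo tcpdump -n -i wg0 icmp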

At this point I'm not sure whether the custom routing table is having any effect: the network stack apparently does send the packets out through wg0, yet it also generates a "host unreachable" reply locally.
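
One way to check whether the policy rule is actually being consulted for these packets is to ask the kernel which route it would pick for a packet forwarded from the container:

# Simulate routing of a packet from the container (arriving on docker82) to 8.8.8.8
ip route get 8.8.8.8 from 10.0.82.14 iif docker82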

Does anyone have any insight into what could be causing this?

Thank you

  • "I have already tested using the VPS as a default gateway on other devices, with WireGuard's AllowedIPs = 0.0.0.0/0, ::/0, " : it's not clear if currently that's the case. Is it? Else can you provide your Docker system's WireGuard configuration? As well as ip -br link; ip -br addr; ip route; ip rule? And since it should be small, you could as well provide nft list ruleset so your problem can be completely reproducible.
    – A.B
    Commented Mar 24 at 14:39
  • Exactly: on my home server, AllowedIPs for the VPS peer was set to allow only private subnets, which was the culprit! I just stumbled upon another answer that led me to the solution, which I will now post as an answer. Thank you for your interest
    – LaXiS
    Commented Mar 24 at 14:51

1 Answer


The solution was more obvious than expected... I was inspired by this answer, which at the end points out setting AllowedIPs in the WireGuard peer configuration to 0.0.0.0/0.

The resulting configuration is simpler and is even included as an example in the wg-quick manual: configure WireGuard to install its routes in a separate routing table (so it does not replace the system's default route, which I don't want in this case) and then use that table for source-based routing.

/etc/wireguard/wg0.conf:

[Interface]
PrivateKey = ...
Address = 10.0.80.200/32, fd80::200/128
Table = 100 # table 100 is named "wireguard" in /etc/iproute2/rt_tables
PostUp = ip route add 10.0.1.1 dev enp45s0 table wireguard
PostUp = ip rule add from 10.0.82.14 table wireguard
PreDown = ip rule del from 10.0.82.14 table wireguard
PreDown = ip route del 10.0.1.1 dev enp45s0 table wireguard

[Peer]
PublicKey = ...
Endpoint = ...
AllowedIPs = 0.0.0.0/0, ::/0
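
After wg-quick up wg0, the rule and the table can be inspected, and traffic from the container should now exit with the VPS's external address. A quick check (the container name and the IP-echo service are placeholders, not part of my actual setup):

wg-quick up wg0
ip rule show                  # shows the "from 10.0.82.14 lookup wireguard" rule
ip route show table wireguard # default route through wg0 plus the 10.0.1.1 host route
# From inside the container (illustrative name):
docker exec -it mycontainer wget -qO- https://ifconfig.me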
