Using Wireguard to forward traffic from public facing VPS to private server

TL;DR: I'm trying to set up a bunch of internet-facing services (web, SMTP, other) on a machine running on my LAN and forward traffic to it from a public-facing VPS using Wireguard, in such a way that the source IPs for that traffic are preserved (i.e. not breaking fail2ban and friends). It's kicking my butt...

I'm trying to rework my homelab/homeserver setup a bit and I'm well into the pulling my hair out phase.

I have a single Ubuntu server machine (we'll call this "backend") that hosts a bunch of services (HTTP, HTTPS, SMTP, IMAP) via Docker. This machine is hidden behind NAT, and I'd like it to stay that way.

I have a lightweight VPS machine, also Ubuntu (we'll call this "frontend"). Let's pretend it has the well-known IP of 123.456.789.123.

I would like to create a Wireguard tunnel between the two machines and have inbound traffic to the "frontend" VPS (ports 80, 443, 25, 465, 587, 993, ...) be forwarded to the appropriate service on the "backend" in such a way that:

  • I can physically move the backend machine and it'll "just work" (i.e. the "backend" initiates the tunnel)
  • The source IP address for connections is visible to the underlying services on the VPS (i.e. I don't want to just use something like rinetd, Caddy, or masquerade connections since that won't work for the non-proxy friendly traffic)
  • I don't want ALL traffic from the backend machine to be dumped through the frontend, only stuff that originated from the frontend.
  • I want to be able to access these services from other containers on the VPS (i.e. hairpin NAT type thing - specifically several of the containers need to connect to the SMTP server container)

Here is a rough picture of what I have in mind...


The closest resource I've found on this is the following (specifically the example on Policy Routing), but I think my use of Docker is complicating things.

https://www.procustodibus.com/blog/2022/09/wireguard-port-forward-from-internet/#default-route

Also this post:

https://unix.stackexchange.com/questions/708264/vps-port-forwarding-without-snat-masquerade-using-source-based-routing

(I'll focus on HTTP for the samples below)

With the following Wireguard configuration on the "frontend":

[Interface]
PrivateKey = ###
Address = 10.99.1.2
ListenPort = 51822

# packet forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1

# port forwarding (HTTP) // repeat for each port
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.99.1.1
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.99.1.1

# remote settings for the private server
[Peer]
PublicKey = ###
AllowedIPs = 10.99.1.1

And the Wireguard configuration for the "backend":

[Interface]
PrivateKey = ####
Address = 10.99.1.1
Table = vpsrt

PreUp = sysctl -w net.ipv4.ip_forward=1

PreUp = ip rule add from 10.99.1.1 table vpsrt priority 1
PostDown = ip rule del from 10.99.1.1 table vpsrt priority 1

# remote settings for the public server
[Peer]
PublicKey = ####
Endpoint = 123.456.789.123:51822
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

Interestingly, with the above I can ping the "backend" from the "frontend" but not the reverse. I'm not sure if that's a first indication that I'm doing something wrong.
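I suspect the ping asymmetry comes from Table = vpsrt: wg-quick installs the route to 10.99.1.2 only in the vpsrt table, and a locally generated ping hasn't picked a source address yet when the route lookup runs, so my from 10.99.1.1 rule never matches and the (route-less) main table is consulted. This is how I'd check that theory on the backend (a sketch, not something I've verified end to end):

```shell
# Without a source address, the "from 10.99.1.1" rule never matches,
# so the lookup falls through to the main table and fails:
ip route get 10.99.1.2

# Forcing the tunnel address as the source should resolve via vpsrt:
ip route get 10.99.1.2 from 10.99.1.1

# If so, pinning the source address makes the ping work:
ping -I 10.99.1.1 10.99.1.2
```

If that pans out, the behaviour would be expected rather than a misconfiguration, but corrections are welcome.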

With the above configuration, if I run a webserver directly on the "backend" machine (i.e. python3 -m http.server 80), it works like a charm.

Specifically:

  • From Internet: curl http://123.456.789.123 works (source IP = client IP! Yay!)
  • From Frontend: curl http://10.99.1.1 works (source IP = 10.99.1.2)
  • From Backend: curl http://10.99.1.1 works (source IP = 10.99.1.1)
  • From Backend: curl http://123.456.789.123 mostly works (source IP = public IP for my LAN, would prefer this traffic was kept internal)

Great success!

Problem: I can't make this work with containerized services running on the "backend"

However... if I were to then move my HTTP server into a container, it stops working...

services:
  whoami:
    image: "containous/whoami"
    ports:
      - "80:80"
    restart: always

I think the issue is that I don't really have a service listening on port 80 on the backend machine itself; instead, the service runs on a Docker-assigned internal IP, with an iptables rule forwarding traffic on port 80 to that container. Unfortunately, I still have a bunch of gaps in my knowledge here, and I'm not sure how to troubleshoot or resolve this.
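My working theory is that the container's reply packets are routed by the host while their source is still the container's 172.17.x address (conntrack only restores 10.99.1.1 later), so my from 10.99.1.1 rule never fires and replies leave via the LAN default route. If that's right, a conntrack-mark approach like the one in the procustodibus article might work. A sketch for the backend, where the mark value 0x1 and the rule priority are my own arbitrary choices, and I'm assuming the tunnel interface is named wg0 and the containers sit on the default docker0 bridge:

```shell
# Tag connections that arrive over the tunnel...
iptables -t mangle -A PREROUTING -i wg0 -j CONNMARK --set-mark 0x1

# ...and copy that tag onto reply packets coming back out of the
# container, before the host makes its routing decision:
iptables -t mangle -A PREROUTING -i docker0 -j CONNMARK --restore-mark

# Route tagged replies through the vpsrt table (i.e. back out via wg0):
ip rule add fwmark 0x1 table vpsrt priority 2
```

These could presumably live in PreUp/PostDown lines of the backend's Wireguard config, but I haven't confirmed how they interact with Docker's own iptables chains.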

Things I've tried:

  • Forwarding all traffic from the "backend" to the "frontend" (removing the policy-based routing table and PreUp/PostDown from the "backend" configuration and adding masquerading to the "frontend"). This got close to working, but now I'm dumping all my traffic through the VPS (meh) and hairpinning doesn't work.
  • Assigning static IPs to the mail and web server containers. I changed my "frontend" configuration to use those IPs and added them to the AllowedIPs on the server. I couldn't get this working.
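For the hairpin requirement specifically, I'm guessing the cleanest option is to masquerade the frontend's container traffic onto the tunnel, since for these internal connections I don't actually care about preserving the source IP. A sketch, assuming the tunnel interface is named wg0 and the containers are on Docker's default 172.17.0.0/16 bridge subnet (mine may differ):

```shell
# On the frontend: let local containers reach 10.99.1.1 over the tunnel;
# the backend then sees 10.99.1.2 as the source for this traffic only.
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -o wg0 -j MASQUERADE
```

Containers on a user-defined compose network would get a different subnet, so the -s match would need adjusting. I haven't tested whether this coexists cleanly with the DNAT rules above.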

I would appreciate any help/hints on this. I'm stumped.