
I am working on a home project and would like to give my machine at home, essentially, the public IP address that is assigned to my VPS in the cloud.

At home I have a public IP address and I have set up a WireGuard server; external port 1194 is forwarded to that machine, so WG clients from the internet can connect to it.

Now, I have a VPS in the cloud with a static public IP address (let's say 44.44.44.44) and a machine in the local network (let's say 192.168.1.3), and I want the local machine to handle traffic as if it actually had the public IP (44.44.44.44) of the VPS in the cloud. That means all incoming traffic arriving at the VPS is routed to the machine (192.168.1.3) in the local network.

I had no issues connecting the WG server and the VPS using WireGuard, or even redirecting all the traffic from the VPS through the WG server (using iptables NAT rules), but I am stuck at routing incoming traffic from the VPS to the machine in the local network while preserving the original IP addresses.

My client (VPS) WG config:

[Interface]
PrivateKey = dGhpcyBpcyBub3QgdGhlIGtleSA7KQ==
Address = 10.66.66.2/24
#This was part of an earlier test using iptables NAT rules
#PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
#PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = dGhpcyBpcyBub3QgdGhlIGtleSBlYXRoZXIgOyk=
Endpoint = 33.33.33.123:1194
AllowedIPs = 10.66.66.1/32

My server (WG) config:

[Interface]
Address = 10.66.66.1/24
ListenPort = 1194
PrivateKey = dGhpcyBpcyBub3QgdGhlIGtleSA7KQ==
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = dGhpcyBpcyBub3QgdGhlIGtleSBlYXRoZXIgOyk=
AllowedIPs = 10.66.66.2/32

Is my understanding correct that the best course of action would be to connect the machine (192.168.1.3) to the WG server (say, with IP 10.66.66.3) and then, on the VPS, create a static route sending eth0's incoming traffic to 10.66.66.3?

This is a simple drawing of the network and what I am trying to achieve: [network diagram omitted]

  • Does the VPS have two public addresses or just one? In other words, if you forward all traffic for 44.44.44.44 to another system, how will you reach the VPS itself?
    Commented Dec 10, 2020 at 13:46
  • That is a good question. The VPS has only one public IP address. One way to reach it is over WG (via 10.66.66.2), or by tagging the WG-related packets to separate them from all other traffic.
    – JohnDow
    Commented Dec 10, 2020 at 13:56
  • Having only one IP address on the VPS will make everything very complex: it will then exist in two places for two roles. Anyway, do you really need to have wgserver involved? Can't you have a direct WireGuard tunnel between server and vps? Also I'm puzzled why you fix the port on wgserver and not on vps. The opposite (having wgserver, behind NAT, initiate the connection first to vps with its public IP address) would likely not even require forwarding port 1194 on the router
    – A.B
    Commented Dec 11, 2020 at 13:07
  • I don't need to involve WG per se, but using only one interface on the VPS would be a great solution, as the VPS provider provides only one interface.
    – JohnDow
    Commented Dec 12, 2020 at 13:41
  • I understand the problem of redirecting incoming traffic from one interface to WG over the same interface. But my understanding was that by tagging WG traffic you can separate it and make sure there is no loop in the network.
    – JohnDow
    Commented Dec 12, 2020 at 16:13

1 Answer


Presentation

Here's an answer requiring minimal support from the server itself, but multiple routing tweaks on the WireGuard server and especially on the VPS. I chose to keep OP's setup in place, including keeping the WireGuard tunnel between the VPS and the WGserver.

Proper integration of the configuration into the specific distribution(s) in use won't be dealt with in this answer (but hints are given).

This requires kernel >= 4.17 for the ipproto/dport/sport policy routing feature, which allows making exceptions without needing iptables and/or marks:

Extends fib rule match support to include sport, dport and ip proto match (to complete the 5-tuple match support). Common use-cases of Policy based routing in the data center require 5-tuple match
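A quick way to check that the running kernel and iproute2 accept these selectors is to add and remove a throwaway rule (a sketch; the preference value 32000 and the selectors are arbitrary):

    uname -r                                                          # should report 4.17 or later
    ip rule add preference 32000 ipproto udp dport 1194 lookup main   # fails on kernels lacking the feature
    ip rule delete preference 32000                                   # remove the test rule again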

This setup uses neither iptables nor NAT: server's 44.44.44.44 address will be routed both ways through the tunnel, despite (one could say) VPS' own 44.44.44.44 address. One could imagine even IPsec ESP packets used by server would work correctly through this tunnel (not tested). It mostly relies on additional routing tables (ip route add table XXX ...) selected by policy routing (ip rule add ... lookup XXX).
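As a minimal illustration of that pattern (not one of the actual rules used below; the table number and the addresses are placeholders):

    ip route add table 123 default via 192.0.2.1              # populate an additional routing table
    ip rule add preference 500 from 198.51.100.7 lookup 123   # select it by policy for a given source
    ip rule show                                               # list the policy rules in effect
    ip route show table 123                                    # inspect the additional table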

Some considerations to have in mind:

  • Addresses of the VPS and the router must not change dynamically, because several rules depend on them. That means WireGuard's automatic roaming feature wouldn't work here, because the required ip rule entries reference specific IP addresses and would have to be changed too. If the VPS' address changes but is known to stay within a single address block, the ip rules and routes can probably be adapted to match a route/netmask rather than just 44.44.44.44. If it's completely random, then one can relax the selectors to check only port 1194.

  • VPS' own connectivity is now very limited: it's just turned into a limited router. In addition to being a tunnel endpoint, only SSH will be available, nothing else, not even DNS. Updates to the VPS should be done using a proxy that could be set up on WGserver, on server, or through an SSH tunnel. This should be configured so it avoids using the tunnel more than needed, but it would work even if not (uselessly doubling the tunnel's traffic).

  • VPS' own firewall rules might have to be checked. As it will route traffic, this traffic is still subject to iptables/nftables if set in place and, more importantly, to conntrack for stateful rules. conntrack doesn't know about routing, so it will consider 44.44.44.44 a unique node. This should not matter much, as local traffic will use the INPUT/OUTPUT chains while routed traffic will use the FORWARD chain, and the same kind of traffic won't exist in both, since VPS is now limited to the tunnel and SSH. So I didn't add iptables rules in the raw table to split conntrack into zones or prevent tracking, but that's still an option (a sketch of such rules is given right after this list).
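For reference, such raw-table rules could look like the sketch below; they are not needed for this answer, and the zone number 1 is arbitrary. The two pairs are alternatives: either keep tunnel-routed traffic in its own conntrack zone, or skip tracking it entirely.

    # keep conntrack state for traffic seen on the tunnel in its own zone
    iptables -t raw -A PREROUTING -i wg0 -j CT --zone 1
    iptables -t raw -A OUTPUT -o wg0 -j CT --zone 1

    # or: don't track tunnel traffic at all
    iptables -t raw -A PREROUTING -i wg0 -j NOTRACK
    iptables -t raw -A OUTPUT -o wg0 -j NOTRACK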

Other choices could have been possible:

  • applying multiple layers of NAT, but I'd rather avoid NAT whenever possible. It also wouldn't transparently achieve the goal of "I want the local machine to handle traffic as if it actually had the public IP", since for example NAT is known to disrupt IPsec's native ESP traffic, and it would require correctly handling the exact same traffic between the same remote Internet system and either VPS or server (probably with marks and conntrack zones).
  • getting some help on VPS from a separate network namespace and stealing non-tunnel packets (using tc or nftables, probably not iptables), but this would still require most of the tweaks and then some.

Non-exhaustive list of routing tweaks used:

  • on both tunnel endpoints:

    • add a routing exception for the tunnel's own packets on both sides so they are still routed normally
  • on WGserver

    • allow any IP address from the WireGuard tunnel, so that server can receive traffic from all of the Internet rather than only from 10.66.66.2
    • don't use NAT rules
  • on VPS:

    • also allow 44.44.44.44 from the WireGuard tunnel in addition to 10.66.66.1
    • move the rule calling the local routing table to a later position (a higher preference value) to make room for exceptions even to the local routing table, allowing VPS to route an IP address belonging to itself. That's probably the most important alteration.
    • add a proxy ARP entry for 44.44.44.44 to make VPS still answer its router's ARP requests for it despite routing it (as any more common proxy ARP setup would also require)
  • on server

    • server can optionally be made multi-homed, with preference given to using 44.44.44.44 through the tunnel by default (thus avoiding issues for UDP services that are not multi-homing aware).

This answer had to be tested; it could not have worked on the first try. Most testing was done using tcpdump and ip route get (an example follows). It doesn't take into account possible interference from current (and unknown) firewall rules in use on WGserver or VPS.
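For example, once everything below is in place on VPS, ip route get can be used to check which path a given flow would take (a sketch; 203.0.113.10 stands for any remote Internet client, and the ipproto/dport selectors need a recent enough iproute2):

    # a remote client's HTTPS packet to 44.44.44.44: should be routed out through wg0
    ip route get 44.44.44.44 from 203.0.113.10 iif eth0 ipproto tcp dport 443

    # a remote client's SSH packet: should stay local if the optional SSH exception rules are used
    ip route get 44.44.44.44 from 203.0.113.10 iif eth0 ipproto tcp dport 22

    # watch the traffic actually traversing the tunnel
    tcpdump -ni wg0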


Settings

Below are the settings, sorted in an order intended to avoid loss of connectivity, both between each system to configure and for each individual command to run. In any case, one should have remote console access to VPS available to cope with problems. As the commands below are not saved (except wg-quick's configuration, unless one chooses to use wg directly), a remote reset would restore access to VPS.

WGserver

Supposed to be initially configured like this:

ip link set dev eth0 up
ip address add 192.168.1.2/24 dev eth0
ip route add default via 192.168.1.1

  • set as router at least on eth0 and future interfaces like wg0 (usually configured in /etc/sysctl.{conf,d/}):

    sysctl -w net.ipv4.conf.default.forwarding=1
    sysctl -w net.ipv4.conf.eth0.forwarding=1
    
  • wg-quick's changed configuration:

    [Interface]
    Address = 10.66.66.1/24
    ListenPort = 1194
    PrivateKey = dGhpcyBpcyBub3QgdGhlIGtleSA7KQ==
    Table = off
    PostUp = 
    PostDown = 
    
    [Peer]
    PublicKey = dGhpcyBpcyBub3QgdGhlIGtleSBlYXRoZXIgOyk=
    PersistentKeepalive = 25
    AllowedIPs = 0.0.0.0/0
    

    There's no NAT anymore: traffic will just be routed through the tunnel without alteration. Just ensure firewall rules let the needed traffic pass and that there isn't another generic NAT rule. Table = off prevents wg-quick from configuring additional routing, since custom routing will be done anyway. Since WGserver is behind the router's NAT, PersistentKeepalive should be used to refresh the router's NAT association table (eg: conntrack for a Linux router).

    As any IP traffic can come through the tunnel, AllowedIPs is set to 0.0.0.0/0 to associate any IP with the peer for WireGuard's own cryptokey routing.
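    Once the tunnel is up, this can be quickly verified with wg itself (just a check, not a required step):

    wg show wg0 allowed-ips    # should list 0.0.0.0/0 for the VPS peer
    wg show wg0 endpoints      # shows the peer's endpoint once learned from an incoming handshake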

The tunnel must be up before applying the following settings. For integration, one could perhaps create a custom script to be run from wg-quick's PostUp/PostDown that would include the configurations to follow (a sketch of such a script is given at the end of this WGserver section). Note that while routes disappear when interfaces go down, ip rule entries are more likely to stay around even when interfaces are deleted, so they should be properly cleaned up to avoid duplicates and confusion later.

  • keep the Internet traffic belonging to the 44.44.44.44 tunnel itself going through the router, both ways, and keep it from being affected by the routes added to the main table at the end. Use explicit preferences so that the order of the commands doesn't matter:

    ip route add table 1000 192.168.1.1/32 dev eth0
    ip route add table 1000 default via 192.168.1.1 dev eth0
    
    ip rule add preference 1000 to 44.44.44.44 iif lo ipproto udp sport 1194 table 1000
    ip rule add preference 1001 from 44.44.44.44 to 192.168.1.2 iif eth0 ipproto udp dport 1194 table 1000
    
  • optional: have SSH also be routed outside of the tunnel, through the router. Without it, ssh 44.44.44.44 from WGserver would reach server rather than VPS, but one could still ssh to VPS using ssh 10.66.66.2.

    ip rule add preference 1002 to 44.44.44.44 iif lo ipproto tcp dport 22 lookup 1000
    ip rule add preference 1003 from 44.44.44.44 to 192.168.1.2 iif eth0 ipproto tcp sport 22 lookup 1000
    
  • have general traffic with source 44.44.44.44, coming from server, be routed through the tunnel. WGserver itself is unaffected for the general case not involving 44.44.44.44 and still uses the router to reach the Internet:

    ip route add table 2000 default dev wg0
    
    ip rule add preference 2000 from 44.44.44.44 iif eth0 lookup 2000
    

    Note: the earlier uses of iif eth0 are for traffic from router, while this last use of iif eth0 is for traffic from server. Routing rules can't distinguish that a different gateway (router vs server) was used to arrive on the same interface (eth0), but that's not an issue because the specific port used for the tunnel is enough to distinguish the cases.

  • Add a route (a standard one, in the main table) so that the remaining traffic to server's 44.44.44.44 is sent directly on the LAN (both from WGserver itself and for traffic routed through the tunnel):

    ip route add 44.44.44.44/32 dev eth0
    

    The home Ethernet broadcast domain will thus see two kinds of local addresses: 192.168.1.0/24 and 44.44.44.44/32 (eg: ARP "who-has 44.44.44.44" from 192.168.1.2).
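As a sketch of the integration mentioned earlier, the commands above could be wrapped in a small script called from wg-quick (the script path and name are arbitrary; the optional SSH rules 1002/1003 are omitted here):

    #!/bin/sh
    # hypothetical /etc/wireguard/wgserver-routing.sh, called from wg-quick with:
    #   PostUp = /etc/wireguard/wgserver-routing.sh up
    #   PostDown = /etc/wireguard/wgserver-routing.sh down
    case "$1" in
    up)
        ip route add table 1000 192.168.1.1/32 dev eth0
        ip route add table 1000 default via 192.168.1.1 dev eth0
        ip rule add preference 1000 to 44.44.44.44 iif lo ipproto udp sport 1194 table 1000
        ip rule add preference 1001 from 44.44.44.44 to 192.168.1.2 iif eth0 ipproto udp dport 1194 table 1000
        ip route add table 2000 default dev wg0
        ip rule add preference 2000 from 44.44.44.44 iif eth0 lookup 2000
        ip route add 44.44.44.44/32 dev eth0
        ;;
    down)
        # table 2000's route vanishes with wg0, but the rest must be removed explicitly
        ip rule delete preference 2000
        ip rule delete preference 1001
        ip rule delete preference 1000
        ip route flush table 1000
        ip route delete 44.44.44.44/32 dev eth0
        ;;
    esac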

VPS

Supposed to be initially configured like this (with 44.44.44.1 as VPS' router):

ip link set dev eth0 up
ip address add 44.44.44.44/24 dev eth0
ip route add default via 44.44.44.1

  • set as router:

    sysctl -w net.ipv4.conf.default.forwarding=1
    sysctl -w net.ipv4.conf.eth0.forwarding=1
    
  • wg-quick's changed configuration:

    [Interface]
    PrivateKey = dGhpcyBpcyBub3QgdGhlIGtleSA7KQ==
    Address = 10.66.66.2/24
    Table = off
    PostUp = 
    PostDown = 
    
    [Peer]
    PublicKey = dGhpcyBpcyBub3QgdGhlIGtleSBlYXRoZXIgOyk=
    Endpoint = 33.33.33.123:1194
    PersistentKeepalive = 1000
    AllowedIPs = 10.66.66.1/32,44.44.44.44/32
    

    PersistentKeepalive here is only there to make sure peer WGserver receives an initial handshake and will then know its peer's endpoint (it could have been simpler to set an endpoint on WGserver and not need one on VPS, but OP chose to do it the other way around, and many rules depend on port 1194 and its location). Otherwise, without any initial incoming traffic, there can't be any outgoing traffic from WGserver, since it doesn't know its endpoint yet. 44.44.44.44/32 is added to the list of AllowedIPs so it's associated with the peer for WireGuard's cryptokey routing.

The tunnel must be up before applying the following (as before, PostUp/PostDown could probably be used to apply these settings).

  • move the local table rule later (to a higher preference value) to make room for rules that must bypass the local table.

    ip rule add preference 200 lookup local
    ip rule delete preference 0
    

    All rules with a preference lower than 200 are now evaluated before this rule that classifies relevant traffic as local traffic: they can prevent that from happening and have 44.44.44.44 routed elsewhere instead, despite it being a local address. (The resulting rule layout on VPS is sketched after this list.)

  • tunnel exceptions to the exception rules coming later:

    ip rule add preference 100 from 33.33.33.123 to 44.44.44.44 iif eth0 ipproto udp sport 1194 lookup local
    ip rule add preference 101 from 44.44.44.44 to 33.33.33.123 iif lo ipproto udp dport 1194 lookup main
    
  • optional SSH exception rules to allow any Internet source to still reach VPS, rather than server, when using ssh. Optional, but without them the only access left is through the tunnel: if the tunnel gets misconfigured, VPS access is lost.

    ip rule add preference 102 to 44.44.44.44 iif eth0 ipproto tcp dport 22 lookup local
    ip rule add preference 103 from 44.44.44.44 iif lo ipproto tcp sport 22 lookup main
    
  • add a proxy ARP entry to keep VPS answering ARP requests for 44.44.44.44, which is now routed by VPS without its own router knowing it:

    ip neighbour add proxy 44.44.44.44 dev eth0
    

    Without this, connectivity won't work correctly anymore after the next set of rules, because ARP replies are no longer sent to VPS' router (actually, connectivity would sporadically work right after the VPS router's FAILED ARP entry times out while traffic is simultaneously being received from VPS, but that entry would become FAILED again a few seconds later, until the next timeout).

  • exception routes and rules for the general case: routing 44.44.44.44 instead of considering it as local traffic (some routes are just duplicated from the main routing table):

    ip route add table 150 44.44.44.0/24 dev eth0
    ip route add table 150 default via 44.44.44.1 dev eth0
    ip route add table 150 10.66.66.0/24 dev wg0 src 10.66.66.2
    ip route add table 150 44.44.44.44/32 dev wg0 src 10.66.66.2
    
    ip rule add preference 150 to 44.44.44.44 iif eth0 lookup 150
    ip rule add preference 151 from 44.44.44.44 iif wg0 lookup 150
    
  • optionally, add two routes (one copied from the local table) and insert two rules before all other rules to still allow VPS itself to connect to server's 44.44.44.44 from 10.66.66.2, should that be needed for some reason. There might be some duplication to avoid, but as it's optional...

    ip route add table 50 local 10.66.66.2 dev wg0
    ip route add table 50 44.44.44.44/32 dev wg0
    
    ip rule add preference 50 from 10.66.66.2 iif lo to 44.44.44.44 table 50
    ip rule add preference 51 from 44.44.44.44 iif wg0 to 10.66.66.2 lookup 50
    

    Example of use on VPS to reach server (once it's configured):

    ssh -b 10.66.66.2 44.44.44.44
    

    Example of use from server (once it's configured) to reach VPS:

    ssh 10.66.66.2
    
  • optionally, protect home against a DDoS routed through VPS, simply by using a tc ... netem qdisc. It's set on wg0's egress, while it could be more efficient (but more complex) to set it on eth0's ingress.

    For example, if home can withstand an incoming bandwidth of ~50 Mbit/s through WGserver's tunnel (meaning actually somewhat more on router):

    tc qdisc add dev wg0 root netem rate 49mbit
    
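Once all of the above is in place on VPS (including the optional parts), the policy rules should look roughly like this; the exact formatting depends on the iproute2 version:

    ip rule show
    # 50:     from 10.66.66.2 to 44.44.44.44 iif lo lookup 50
    # 51:     from 44.44.44.44 to 10.66.66.2 iif wg0 lookup 50
    # 100:    from 33.33.33.123 to 44.44.44.44 iif eth0 ipproto udp sport 1194 lookup local
    # 101:    from 44.44.44.44 to 33.33.33.123 iif lo ipproto udp dport 1194 lookup main
    # 102:    from all to 44.44.44.44 iif eth0 ipproto tcp dport 22 lookup local
    # 103:    from 44.44.44.44 iif lo ipproto tcp sport 22 lookup main
    # 150:    from all to 44.44.44.44 iif eth0 lookup 150
    # 151:    from 44.44.44.44 iif wg0 lookup 150
    # 200:    from all lookup local
    # 32766:  from all lookup main
    # 32767:  from all lookup default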

server

server is supposed to be initially configured like this:

ip link set dev eth0 up
ip address add 192.168.1.3/24 dev eth0
ip route add default via 192.168.1.1

The following changes are done:

  • add 44.44.44.44 directly associated with peer 192.168.1.2/32 on eth0 (that's just a shortcut for an ip address plus an ip route command), and use WGserver as the router, with a preference for the 44.44.44.44 source:

    ip address add 44.44.44.44 peer 192.168.1.2/32 dev eth0
    ip route delete default
    ip route add default via 192.168.1.2 src 44.44.44.44
    
  • optionally completely remove 192.168.1.3 to keep only 44.44.44.44:

    ip address delete 192.168.1.3/24 dev eth0
    
  • or instead of the previous command, set up multi-homing (with 44.44.44.44 staying the default in use):

    ip route add table 192 default via 192.168.1.1 dev eth0
    ip rule add from 192.168.1.3 iif lo lookup 192
    

    Multi-homing examples:

    • by default use source 44.44.44.44:

      curl http://ifconfig.co/
      
    • manually select source 192.168.1.3:

      curl --interface 192.168.1.3 http://ifconfig.co/
      

    If server itself is configured as a router+NAT hosting containers or VMs, a MASQUERADE iptables rule might choose the wrong outgoing address (because it always chooses the first address on an interface, and 192.168.1.3/24 was set first), which would interfere with the routes in place. Always use SNAT to be sure of the result.

    LXC example, with SNAT instead of MASQUERADE (which should be changed inside the LXC-provided lxc-net scripted command):

    • general case:

      iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -j SNAT --to-source 44.44.44.44
      
    • container with IP address 10.0.3.123 should be routed through the home router instead:

      iptables -t nat -I POSTROUTING -s 10.0.3.123/32 -o eth0 -j SNAT --to-source 192.168.1.3
      
    • it might be necessary to duplicate a route to reach 192.168.1.3 itself from such a container (not that there's any obvious reason to need this). For LXC:

      ip route add table 192 10.0.3.0/24 dev lxcbr0
      
  • optionally, if Path MTU Discovery isn't working correctly (eg: ICMP is filtered somewhere) between server and a remote client or server, directly setting the route MTU to the WireGuard tunnel interface's MTU (1420 here) could help in cases where an established TCP connection appears to stay frozen. This can be set per route rather than per interface, thus not affecting the optional multi-homed 192.168.1.3-source route.

    ip route delete default
    

    then either (recommended):

    ip route add default via 192.168.1.2 src 44.44.44.44 mtu 1420
    

    or to not even try PMTUD:

    ip route add default via 192.168.1.2 src 44.44.44.44 mtu lock 1420
    

Any service hosted on server will now be reachable from the Internet at 44.44.44.44 (except SSH, if that option was used on VPS), and any service on the Internet will be reachable from server using the 44.44.44.44 source address.
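A few quick end-to-end checks once everything is configured (ifconfig.co is just the example service already used above):

    # on server: outgoing traffic should now appear to come from the VPS address
    curl http://ifconfig.co/          # expect 44.44.44.44

    # on WGserver: the forwarded traffic should be visible inside the tunnel, unaltered
    tcpdump -ni wg0 host 44.44.44.44

    # from any external host: 44.44.44.44 now answers from server
    # (SSH excepted, if the optional VPS rules for port 22 were kept)
    ping 44.44.44.44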

  • That is an awesome answer. I have tested it and it works. Apparently my ISP stopped providing static IP addresses, so now I am figuring out how to move the WG server to the VPS, put another WG client in the local network, and route traffic not to the WG server but to that WG client in the local network through the WG server.
    – JohnDow
    Commented Dec 22, 2020 at 15:17
  • There's no WG server or client: they are peers. You can just reverse the location of the fixed port: have it fixed on the VPS instead of at home. This would simplify things, as you wouldn't need to port forward 1194 anymore. It would also require changing many of the routing rules so they match the new configuration (including, but not only, flipping sport to dport and dport to sport). You would also have to remove the home public IP reference in the rules, so that a whole port is dedicated to WG from anywhere. Roaming would then be available again for home.
    – A.B
    Commented Dec 22, 2020 at 15:20
  • The iptables SNAT line is a gem. If your server has multiple IPv6 addresses, you can easily assign each client their own on the endpoint. So you can do the typical setup where your clients share the host's primary IPv4 as normal, but then assign each client their own outbound IPv6 with the one-liner. This probably makes the most sense for most network configurations. As an ip6tables rule it just looks like ip6tables -t nat -A POSTROUTING -s WG_PEER_IPV6_LOCAL_ADDR -o eth0 -j SNAT --to-source IPV6_PUBLIC_ADDR
    – Justin
    Commented Jan 11 at 5:11
