
I'm trying to dockerize a VPN-over-VPN setup that I've been using to bridge two LAN segments over another VPN connection, but I'm running into a roadblock that seems like it should have a solution.

VLAN1 <-> USB-Ethernet adapter <-> [ tincd (Mode: switch) <-> openconnect ] <-> LAN <-> {internet} <-> tincd (remote) <-> VLAN1 (remote)

The way I have this working without a Docker container is that tincd is essentially configured to connect to an IP address that can only be reached once the openconnect VPN is established. I also have a bridge "vpn-bridge" that bridges the USB-Ethernet adapter with the interface created by tincd, and an iptables rule that accepts and forwards traffic from/to the bridge. This works great: devices can see and talk to each other as if they were on the same LAN.
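
For context, the host-side bridge is set up roughly like this (the USB adapter shows up as enxAABBCCDDEEFF):

brctl addbr vpn-bridge
brctl addif vpn-bridge enxAABBCCDDEEFF    # USB-Ethernet adapter carrying VLAN1
ip link set vpn-bridge up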

So I created a Docker container. I have the openconnect client running inside the container and successfully connected to the remote network, and I have the tincd daemon running inside the container as well, which means it can establish the VPN on top of the VPN. Because I don't want openconnect's routing changes etc. to affect the host, I am using a vpcbr network for that container.
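
Simplified, the container is started along these lines (the image name is a placeholder; the tun device and NET_ADMIN are needed so openconnect and tincd can create their interfaces):

docker network create --driver bridge vpcbr
docker run -d --name vpn-gw \
  --network vpcbr \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  my-vpn-over-vpn-image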

Now, in my tinc-up script I tell tincd to add the interface it created to my bridge (which contains the USB-Ethernet adapter), so that the traffic can be accepted and forwarded. This is the relevant part of the tinc-up script:

# remove any IPv4 address from the tinc interface; it only carries bridged traffic
ifconfig "$INTERFACE" "0.0.0.0"
# attach the tinc interface to the existing bridge and bring it up
brctl addif vpn-bridge "$INTERFACE"
ifconfig "$INTERFACE" up
# allow traffic to be forwarded between ports of the bridge
iptables -A FORWARD -i vpn-bridge -o vpn-bridge -j ACCEPT

Doing this directly on the host is easy: the bridge already exists and is already linked with the USB-Ethernet adapter. But inside the Docker container this bridge does not exist, and even if I created it, I wouldn't be able to link it with the USB-Ethernet interface, which is not directly accessible from inside the container. I don't want the container to use "host" networking either, since I don't want the openconnect routing changes to affect other things running on the host.

I couldn't find a way to expose the existing bridge on the host to the Docker container so that the brctl addif done by tincd can work. Is there a way to do this somehow? Or is this something that ipvlan solves? I am having a hard time finding examples for ipvlan or a way to use it with docker-compose.

1 Answer


After trying a bunch of things, I finally got it to (mostly) work. It turns out ipvlan is not the correct driver; this can be done with macvlan in passthru mode. Here's the relevant excerpt from my docker-compose.yml:

networks:
  # ordinary bridge network, used for talking to the daemons in the container
  main:
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.8.0/24
  # macvlan in passthru mode, attached to the VLAN 9 subinterface of eno1
  vlan:
    driver: macvlan
    driver_opts:
      parent: eno1.9
      macvlan_mode: passthru
    ipam:
      config:
        - subnet: 172.21.9.0/24

I found that I had to name my main network so that it sorts alphabetically before vlan; otherwise, mapping ports to the other daemons running in the containers didn't work. Also, specifically for openconnect, I had to write a custom script that filters the address ranges pushed for the split tunnel to exclude the 172.16.0.0/12 range; otherwise, connections to daemons running inside the container were inadvertently routed over the VPN. A sketch of that filter is below.
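
The filter is basically a wrapper passed to openconnect via --script: it rewrites the CISCO_SPLIT_INC_* environment variables that vpnc-script reads, dropping any split-include network inside 172.16.0.0/12 before handing off to the stock vpnc-script. Something along these lines (the vpnc-script path and the exact matching are approximations, not my script verbatim):

#!/bin/sh
# Drop split-include routes inside 172.16.0.0/12, then run the real vpnc-script.

in_172_16_12() {
    # first octet 172, second octet 16-31
    case "$1" in
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
        *) return 1 ;;
    esac
}

if [ -n "$CISCO_SPLIT_INC" ]; then
    kept=0
    i=0
    while [ "$i" -lt "$CISCO_SPLIT_INC" ]; do
        addr=$(eval echo "\$CISCO_SPLIT_INC_${i}_ADDR")
        if ! in_172_16_12 "$addr"; then
            # compact the surviving entries so they are numbered 0..kept-1
            for f in ADDR MASK MASKLEN; do
                eval "export CISCO_SPLIT_INC_${kept}_${f}=\$CISCO_SPLIT_INC_${i}_${f}"
            done
            kept=$((kept + 1))
        fi
        i=$((i + 1))
    done
    export CISCO_SPLIT_INC="$kept"
fi

exec /usr/share/vpnc-scripts/vpnc-script "$@"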

Another problem was that I couldn't use the USB-Ethernet dongle, because its device name enxAABBCCDDEEFF already fills the 15-character interface name limit, leaving no room to append the VLAN tag (enxAABBCCDDEEFF.9 is too long). I couldn't find a separate macvlan driver option to specify the tag, so I ended up just using the main network interface eno1 and tagging the traffic that way.
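
For what it's worth, when the parent in driver_opts contains a VLAN suffix like eno1.9, Docker creates the tagged subinterface on the host if it doesn't already exist; doing the same by hand would look something like:

ip link add link eno1 name eno1.9 type vlan id 9
ip link set eno1.9 up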

