
I have a webserver on my MacBook in my home network behind a NAT, serving on port 80. I also have a publicly accessible server running Ubuntu, from which I want to access my local webserver, so I open a remote SSH tunnel:

ssh -fnNT -R 8080:localhost:80 remote-host

It works. On the remote machine, I can curl localhost:8080 and get the expected answer.

But when I try this from inside a docker container, for example:

docker run -it --add-host host.docker.internal:host-gateway ubuntu:latest /bin/bash

# curl -vvv host.docker.internal:8080
* connect to 172.17.0.1 port 8080 failed: Connection timed out

"Solutions" I found elsewhere, like using the --network host docker option or having the tunnel listen on all interfaces (ssh -R 0.0.0.0:8080:localhost:80) are not an option due to security concerns.

Accessing any other port on the host system from inside the container is not a problem; for example, curl host.docker.internal:80 gets a response from the host's Caddy server.

I tried to set a firewall rule iptables -I INPUT -i docker0 -j ACCEPT (without really understanding what this does) but this changes nothing.

I tried making sure packet forwarding for IPv6 is enabled (net.ipv6.conf.all.forwarding=1 in /etc/sysctl.conf) and also to have the tunnel only bind to the IPv4 address (using the -4 option), but no luck.

I tried to use the IP of the docker0 interface as a bind_address for the tunnel (ssh -R 172.17.0.1:8080:localhost:80, having of course set GatewayPorts clientspecified in sshd_config). I can then curl 172.17.0.1:8080 from the host system successfully, but still not from inside the container.

A possible complication is that I'm using ufw on the server, allowing in only traffic on ports 80, 443 and my SSH port. When I sudo ufw disable, the above curl request terminates with Connection refused instead of timed out, which I found interesting.

I feel like I'm close. Maybe there is an iptables filter rule that I can set to make this work? I don't have any experience with iptables. How would a request from inside a container to a tunneled port on the host system be classified: is it going in, or out? Any other ideas to debug this problem?

1 Answer


After a two-week deep dive into networks in general, docker networking in particular, and how docker interacts with iptables, I now feel confident enough to answer my own question:

First of all, this has nothing to do with ssh tunnels. Anything you bind to the loopback address 127.0.0.1 is only accessible from the local machine itself and cannot be reached through any other network interface. Docker counts as "outside" here because it has its own network (by default in the 172.16.0.0/12 range). This is a crucial part of docker's whole isolation concept: it keeps the containers separate from the host and from each other.
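You can see this binding scope directly on the server, without any ssh or docker involved. Here is a quick demonstration using python3's built-in HTTP server as a stand-in for the tunnel endpoint (port 8080 as in the question):

```shell
# Start a throwaway HTTP server bound to the loopback address only
python3 -m http.server 8080 --bind 127.0.0.1 &
SRV=$!
sleep 1

# From the host itself, the loopback address works:
code=$(curl -s -o /dev/null -w '%{http_code}' 127.0.0.1:8080)
echo "$code"   # 200

# The listener exists on 127.0.0.1 only, so from a container,
# 172.17.0.1:8080 (the docker gateway) has nothing behind it
ss -tln | grep 8080 || true

kill $SRV
```

The same applies to the ssh tunnel: `-R 8080:localhost:80` binds the remote listener to 127.0.0.1 by default.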

So how to make this work? There are a few options:

  • Bind whatever service you need to the docker gateway ip:

    ssh -fnNT -R 172.17.0.1:8080:localhost:80 remote-host
    

    For this to work you need an additional iptables rule, something like:

    iptables -I INPUT -i docker0 -d 172.17.0.1 -p tcp --dport 8080 -j ACCEPT
    

    This allows traffic coming from the default docker0 network to the docker gateway ip on port 8080. Without this rule, these packets would be dropped.
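You can check that the rule is actually matching by watching its packet and byte counters while a container retries the request (run as root; requires the rule from above to be in place):

```shell
# -v shows packet/byte counters, -n skips reverse DNS lookups
iptables -L INPUT -v -n --line-numbers | grep 8080
```

If the counters stay at zero while a container is curling, the packets are being handled (or dropped) somewhere else.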

  • A bit easier, just bind it to all interfaces:

    ssh -fnNT -R 0.0.0.0:8080:localhost:80 remote-host
    

    The security concerns here probably depend on your individual use case and network architecture. But if you have a simple VPS with a public IP and your iptables INPUT policy is DROP, this is not a problem. I was confused here because when you publish a port with docker like -p 8080:80, docker creates an iptables rule to ACCEPT requests from outside the host; docker's port publishing is insecure by default in that sense. If you bind any other (non-docker) service to a port on 0.0.0.0, it will still be blocked until you open it manually.
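You can inspect docker's auto-generated ACCEPT rules yourself. After publishing a port (e.g. `docker run -p 8080:80 …`), the DOCKER chain in the filter table will contain something like the rule shown in the comment below (run as root; exact output varies by docker version):

```shell
# Docker publishes ports through its own chain, bypassing your INPUT policy:
iptables -L DOCKER -v -n
# Expect a line roughly like:
#   ACCEPT  tcp  --  !docker0  docker0  ...  tcp dpt:80
```

This is why a docker-published port is reachable from outside even when ufw or your own INPUT rules would suggest otherwise.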

  • Tunnel into the container, using the host as a jump proxy. This requires your container to have an openssh server running, which is a bit tricky in itself, and the port needs to be published on the host too, for example with -p 127.0.0.1:2222:22. If you're using docker compose and don't mind starting another service, you could use docker-openssh-server for this.

    Then it's just a matter of:

    ssh -fnNT -R 0.0.0.0:8080:localhost:80 remote-container
    

    When combined with a setup in .ssh/config, for example

    Host remote-host
        HostName remote-host.example.com
        User peter
        Port 22
    
    Host remote-container
        ProxyJump remote-host
        HostName localhost
        User containerpeter
        Port 2222
    

    this will first ssh into your remote host, then from there open the remote tunnel directly into the container. You could of course open the container ssh port to the public and tunnel into there directly, but it's so much more fun with a jump.

  • You could meddle with the iptables PREROUTING chain in the nat table as suggested here, routing the requests with destination 172.17.0.1 to 127.0.0.1. I haven't tried it though, so can't guarantee for the result.
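As an untried sketch of that idea, it could look something like the following. Note that the kernel normally drops packets rerouted to 127.0.0.0/8 from a non-loopback interface as "martians", so the route_localnet sysctl would have to be enabled on docker0 first (both commands require root; not tested, as said above):

```shell
# Allow packets arriving on docker0 to be routed to the loopback range
# (this is off by default -- such packets are otherwise dropped as martians)
sysctl -w net.ipv4.conf.docker0.route_localnet=1

# Rewrite container requests aimed at the gateway address so they hit
# the loopback-only tunnel listener instead
iptables -t nat -A PREROUTING -i docker0 -p tcp -d 172.17.0.1 --dport 8080 \
    -j DNAT --to-destination 127.0.0.1:8080
```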

  • You could use a tool called socat as explained here, which can do TCP forwarding among many other things. Also have not tried this approach.
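Untried as well, but a socat relay on the server would presumably look like this, assuming the tunnel from the question is bound to 127.0.0.1:8080:

```shell
# Accept connections on the docker gateway address and relay each one
# to the loopback-only tunnel endpoint; fork handles clients concurrently
socat TCP-LISTEN:8080,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:8080
```

Containers could then reach the tunnel via host.docker.internal:8080 while the tunnel itself stays bound to loopback.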

If you're now excited to jump into solving a similar issue yourself, here are some handy tools.

See which processes listen on which port on which interface. Use one of:

lsof -i -P -n | grep LISTEN
netstat -tulpn | grep LISTEN
ss -tulpn | grep LISTEN

Trace requests through the iptables chains:

# First insert a rule with target TRACE into the INPUT chain 
iptables -I INPUT -p tcp --dport 8080 -j TRACE

# Then look at the packets while they are going through the chains
xtables-monitor --trace
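The TRACE rule is noisy, so remember to remove it again when you are done debugging (same match as the rule inserted above):

```shell
iptables -D INPUT -p tcp --dport 8080 -j TRACE
```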

Happy networking.
