103

Using Ubuntu Trusty, there is a service running on a remote machine that I can access via port forwarding through an SSH tunnel from localhost:9999.

I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.

I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was working, I tried:

nc -luv -p 9999   # at the host
nc -luv -p 9000   # at the container

following this (paragraph 2), but there was no perceived communication, even when running nc -luv host-ip -p 9000 at the container.

I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (by the host's tunnel to the remote machine, presumably).

So my questions are:

1 - How can I achieve the connection? Do I need to set up an SSH tunnel to the host, or can this be achieved with the Docker port mapping alone?

2 - What's a quick way to test that the connection is up? Via bash, preferably.

Thanks.

1
  • host.docker.internal is what you are looking for right? Commented Jan 24, 2022 at 4:52

9 Answers

100

Using your host's network as the network for your containers via --net=host (or, in docker-compose, network_mode: host) is one option, but it has the unwanted side effects that (a) you now expose the container ports on your host system and (b) you can no longer connect to those containers that are not mapped to your host network.

In your case, a quicker and cleaner solution would be to make your SSH tunnel "available" to your Docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your Docker containers in your host environment (as suggested in the accepted answer).

Setting up the tunnel:

For this to work, retrieve the IP address your docker0 bridge is using via:

ifconfig

you will see something like this:

docker0   Link encap:Ethernet  HWaddr 03:41:4a:26:b7:31  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
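
If ifconfig is not available (it is deprecated on newer distributions), the same address can be read with iproute2 or from Docker itself, for example:

ip -4 addr show docker0   # look for the "inet" line, e.g. 172.17.0.1/16
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'   # prints e.g. 172.17.0.1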

Now you need to tell ssh to bind to this IP and listen for traffic directed towards port 9000:

ssh -L 172.17.0.1:9000:host-ip:9999 [user@]hostname

Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.

Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
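
As a sketch, binding to all interfaces would look like this (host-ip and hostname are placeholders, as in the command above; -N just keeps the session open for forwarding without running a remote command):

ssh -N -L 0.0.0.0:9000:host-ip:9999 [user@]hostname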

Setting up your application:

In your containerized application, use the same docker0 IP to connect to the server: 172.17.0.1:9000. Now traffic routed through your docker0 bridge will also reach your ssh tunnel :)

For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".

Forwarding multiple connections:

When dealing with multiple outgoing connections (e.g. a Docker container needs to connect to multiple remote DBs via tunnel), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening for traffic arriving at different docker0 bridge ports.

Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers, just direct the traffic to different ports of your docker0 bridge, and then create several ssh tunnel commands (one for each port you are listening on) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
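
For example (a sketch with made-up ports, hosts and users; adjust to your setup), two tunnels listening on different docker0 ports and forwarding to two different remote databases could look like this:

ssh -N -L 172.17.0.1:9000:db-host-1:5432 user@remote-host-1   # containers reach DB 1 via 172.17.0.1:9000
ssh -N -L 172.17.0.1:9001:db-host-2:3306 user@remote-host-2   # containers reach DB 2 via 172.17.0.1:9001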

11
  • 13
    This one should be the accepted answer. --net=host has indeed unwanted side effects...
    – hlobit
    Commented Jul 12, 2019 at 16:34
  • 8
    Sadly there is no docker0 on MacOS docs.docker.com/docker-for-mac/networking/…
    – Davos
    Commented Jul 31, 2019 at 13:09
  • 3
    The page you referenced also has a section "I want to connect from a container to a service on the host" which mentions the special DNS name "host.docker.internal" to resolve the host IP (mentioning also that it will not work in production). Some guys also pass the local IP via environment variables. Maybe you find other answers on SO that cover this topic in greater detail.
    – Felix K.
    Commented Aug 28, 2019 at 11:51
  • 3
    This is awesome! I've been chasing my tail on this for THREE DAYS! Thank you! Binding the ssh tunnel to 0.0.0.0 fixed it. It was originally omitted, likely defaulting to 127.0.0.1.
    – Brandon
    Commented Feb 22, 2020 at 0:17
  • 3
    Same as @Brandon I feel a strong random stranger on the internet love towards you. Thanks so much for your answer!!
    – Snowball
    Commented May 20, 2020 at 6:40
41

On macOS (tested with Docker v19.03.2):

1) Create a tunnel on the host:

ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N

2) From the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.

For example:

mysql -h host.docker.internal -P 3336 -u admin -p

Note from the official docker-for-mac docs:

I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST

The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.

The gateway is also reachable as gateway.docker.internal.
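
A quick way to verify the tunnel is reachable from inside a container (a sketch; it assumes nc is installed in the container image and that the tunnel from step 1 listens on 3336):

nc -zv host.docker.internal 3336   # "succeeded" means the container can reach the host tunnel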

2
  • 3
    Wow this is so cool! Confirmed working on Docker 18.09.2 as well. Didn't know you can do that. I assume this will allow you to access any port your laptop's localhost, which really opens the door to a lot of resources.
    – Shawn
    Commented Oct 26, 2019 at 0:43
  • 3
    For Mac users this really is a life saver, since neither docker0 nor --net=host are supported. Thanks!
    – nik
    Commented Sep 2, 2020 at 7:10
19

I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
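
A sketch of what that looks like, together with a quick bash-only check that the forwarded port answers (the image name is a placeholder, and it assumes the host tunnel listens on 127.0.0.1:9999 as in the question; with --net=host the container shares the host's loopback interface):

docker run --net=host -it ubuntu bash
# inside the container, bash can probe the tunnel endpoint without extra tools:
(echo > /dev/tcp/127.0.0.1/9999) 2>/dev/null && echo "tunnel reachable" || echo "connection failed"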

5
Thanks. I can connect to the service now. Is there a quick way to check that the connection is indeed up, though?
    – npit
    Commented Aug 26, 2016 at 12:03
  • 1
    you can actually use curl curl {ip}:{port}/randomendpoint or wget {ip}:{port}/randomendpoint Commented Oct 28, 2016 at 17:27
  • I don't think this will work in Mac OS, since there the docker daemon is actually on a vm: forums.docker.com/t/should-docker-run-net-host-work/14215/26 Commented Apr 24, 2020 at 21:21
  • @MatrixManAtYrService is there a solution for Docker on Mac OS? Commented May 19, 2020 at 16:30
So far as I know, not one that will work easily in all cases. For me, I just switched from connecting to the host tunnel to having the container OS set up the tunnel itself. You could also try to figure out the networking between the VM and macOS. Commented May 20, 2020 at 2:06
5

I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.

I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I didn't think of it earlier. The most important thing was that I didn't want to use host mode: security first.

Instead of trying to allow a container to talk to the host, I simply added another service to the stack which would create the tunnel, so other containers could talk to it easily without any hacks.

After configuring a host inside my ~/.ssh/config:

Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15

And adding a service to the stack:

  postgres:
    image: cagataygurturk/docker-ssh-tunnel:0.0.1
    volumes:
      - $HOME/.ssh:/root/ssh:ro
    environment:
      TUNNEL_HOST: project-postgres-tunnel
      REMOTE_HOST: localhost
      LOCAL_PORT: 5432
      REMOTE_PORT: 5432
    # uncomment if you wish to access the tunnel on the host
    #ports:
    #  - 5432:5432

The PHP container started talking through the tunnel without any problems:

postgresql://user:password@postgres/db?serverVersion=11&charset=utf8

Just remember to put your public key inside that host if you haven't already:

ssh-copy-id project-postgres-tunnel

I'm pretty sure this will work regardless of the OS used (MacOS / Linux).

3
  • Could you please explain why the remote host address needs to be specified twice, once in the config file and once as environment variable? Commented Mar 2, 2020 at 13:43
  • I'm not sure what is where, according to you, specified twice.
    – Mike Doe
    Commented Mar 2, 2020 at 13:45
  • @emix, where are you grabbing the cagataygurturk/docker-ssh-tunnel:0.0.1 image from? I know they have a Github repo, but where is the image stored?
    – Daniel
    Commented Aug 17, 2022 at 16:04
4

I agree with @hlobit that @B12Toaster's answer should be the accepted answer.

In case anyone hits this problem but with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.

In this setup, by default sshd does NOT allow gateway ports, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).

Your Docker container should be able to connect to the Docker host on 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
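
Putting the steps together as a sketch (ports 9000/9999 are placeholders borrowed from the question; the ssh command runs on the remote machine, i.e. the SSH client):

# on the Docker host: in /etc/ssh/sshd_config set
#     GatewayPorts clientspecified
# then restart sshd:
sudo systemctl restart sshd

# on the remote machine (the SSH client): open the remote forward, binding it to the docker0 bridge
ssh -N -R 172.17.0.1:9000:localhost:9999 user@dockerHost
# containers can now reach the remote service at 172.17.0.1:9000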

1

On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).

However, I found out I could directly use the host IP inside my Docker container.

  1. Get your host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
  2. Bash into your container and test if you can ping your host IP:

docker exec -it d6b4be5b20f7 /bin/bash
apt-get update && apt-get install iputils-ping
ping 192.168.0.5

PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms

Apparently, in Windows, you can directly connect from within containers to the host using the host's actual IP address.
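
Once ping works, the tunneled port itself can be checked from inside the container in the same way (a sketch; it assumes the host tunnel listens on port 9999 and that netcat is installed in the container):

apt-get install -y netcat-openbsd
nc -zv 192.168.0.5 9999   # "succeeded" means the container can reach the host tunnel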

1

My 2 cents for Ubuntu 18.04 - a very simple answer, no need for extra tunnels, extra containers, extra docker options or exposing the host.

Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure that the option Connection->SSH->Tunnels "Remote ports do the same (SSH-2 only)" is ticked. This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):

-R [bind_address:]port:host:hostport

However, this did not work for me unless I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.

In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.

Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
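
For reference, a rough OpenSSH command-line equivalent of the PuTTY setup above (ports and hostnames are placeholders): the reverse tunnel is opened from the remote machine towards the Docker host, with the listening side bound to 0.0.0.0.

ssh -N -R 0.0.0.0:9999:localhost:9999 user@docker-host   # needs GatewayPorts yes/clientspecified in the Docker host's sshd_config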

0

In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS.

Create an SSH tunnel to your remote service with whatever tool you prefer, on whatever port you prefer, for example 3300.
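
For instance, with the OpenSSH client available in Windows 10 or WSL, such a tunnel could look like this (hostnames are placeholders):

ssh -N -L 3300:mysql_host:3306 user@jump_server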

Then, from the Docker container you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:

mysql -u user -p -h host.docker.internal -P 3300
0

An easy example to reproduce the situation and ssh to the host:

  1. Run a container using --network="host":

docker container run --network="host" --interactive --tty --rm ubuntu bash

  2. Now you can access your host using localhost. Your host machine is a Linux machine that has a public/private key pair to ssh into it, so copy the contents of your private key file and recreate the key file inside the container. (However, this is just a demonstration; it is not a good way to handle key files.)
  3. Now ssh into your host, using localhost to access it:

ssh -i key_file.pem ec2-user@localhost
