
I have two Docker containers running on two different hosts:

  • Computer A (PC, uses Ethernet), IP 192.168.0.11 [ Docker container running inside on 172.17.0.2 ], OS Windows 7
  • Computer B (laptop, uses WLAN), IP 192.168.0.12 [ Docker container running inside on 172.17.0.2 ], OS Linux Ubuntu

I would like to somehow connect them, so they could communicate with each other (e.g. via ssh).

I have tried to "link" my containers by running them with:

docker run --name master_node --rm -it branislava/ubuntu1504java8:v1

docker run --rm -it --link master_node:master_node1 --name slave_node branislava/ubuntu1504java8:v1

as suggested in the comments. This works; the containers can communicate, but only when they are run on the same host machine.

How could this be accomplished for containers that are running on different machines in the same local network?

  • Yes, it would be possible. Is that all you wanted to know?
    – techraf
    Commented Nov 2, 2016 at 12:50
  • ...Thank God :) I hoped it would be possible, but I still do not know how. I have changed my question. Commented Nov 2, 2016 at 12:51
  • 1
    Thank you (do not know who exactly, though) for down voting my question. But remember that sometimes one simply does not know what to google. It really did not come to my mind to search for "opening container to outside world" or what was exactly the title of the instructions. Commented Nov 2, 2016 at 13:35
  • Voting system on StackExchange is used to differentiate good questions from bad ones. The fact that you "don't know what to google" does not make this question any better. The question is too broad in this form and it will likely get closed sooner or later if you don't improve it.
    – techraf
    Commented Nov 2, 2016 at 14:10
  • 1
    Ok, thank you for your reply. I will think about how to improve it, but whenever I am too precise, the question becomes "too specific" and then gets closed too. I guess either I am really untalented to understand the stack-* question policy or the problem is my problems. Commented Nov 2, 2016 at 17:18

4 Answers


Following is a bash walkthrough for establishing communication between Docker containers of the same image across different hosts on the same local network:

 # Firstly, we have to make a swarm
 # We have to discover ip address of the swarm leader (i.e. manager, master node)
 ifconfig
 # My master computer has ip 192.168.0.12 in the local network
 sudo docker swarm init --advertise-addr 192.168.0.12
 # Info about swarm
 sudo docker info
 # info about nodes (currently only one)
 sudo docker node ls
 # the hostname was set automatically from the host's name

 # adding new nodes to the swarm
 # ssh to the slave node (i.e. worker node) or type physically:
 # this command was generated after 
 # sudo docker swarm init --advertise-addr 192.168.0.12
 # on manager computer
 docker swarm join --token SWMTKN-1-55b12pdctfnvr1wd4idsuzwx34vcjwv9589azdgi0srgr3626q-01zjw639dyoy1ccgpiwcouqlk  192.168.0.12:2377

 # if you cannot remember or find this command,
 # run this again on the manager host
 docker swarm join-token worker

 # ssh to the manager or type in directly:
 # (listing existing nodes in a swarm)
 sudo docker node ls
 # start the image as a service that will run in these containers
 # ssh to the manager or type in directly:
 # replicas will be the number of nodes
 # manager is also a worker node
 sudo docker service create --replicas 2 --name master_node image-name sleep infinity
 # I have to get inside the containers and set some things up before
 # running the application, so I use `sleep infinity`.
 # Otherwise, this is not necessary.

 # what's up with the running process
 sudo docker service inspect --pretty etdo0z8o8timbsdmn3qdv381i
 # or
 sudo docker service inspect master_node
 # also, but only from manager
 sudo docker service ps master_node
 # see running containers (from worker or from manager)
 sudo docker ps

#  promote a node from worker to manager
# `default` is the name of my worker node
sudo docker node promote default
#  demote a node from manager back to worker
sudo docker node demote default

# entering container, if needed
# getting container id with `sudo docker ps`
sudo docker exec -it bb923e379cbd  bash

# retrieving ip and port of the container
# I need this since my containers are communicating via ssh
sudo docker ps
sudo docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 5395cdd22c44

# removing processes from all nodes
sudo docker service rm master_node
# this command should now fail with `no such service`
sudo docker service inspect master_node

Hopefully, someone will find this helpful.
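As a side note, not shown in the walkthrough above but the usual companion step: once the swarm exists, an attachable overlay network lets containers on different hosts reach each other by name through swarm DNS, instead of inspecting IP addresses by hand. A sketch, assuming the same image placeholder as above; the network name `my-overlay` is arbitrary:

```shell
# on the manager: create an attachable overlay network
sudo docker network create --driver overlay --attachable my-overlay
# attach the service to it; tasks on any node join the same network
sudo docker service create --replicas 2 --name master_node \
    --network my-overlay image-name sleep infinity
# inside any task container, peers then resolve via swarm DNS:
#   ping master_node        # the service virtual IP
#   ping tasks.master_node  # the individual task IPs
```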


Show us the docker command.

I think the link is missing from your command.

Here is an example:

docker run --name=redmine -it --rm --link=postgresql-redmine:postgresql \
  --volume=/srv/docker/redmine/redmine:/home/redmine/data \
  sameersbn/redmine:3.3.1
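To spell out what `--link` does here (a sketch, assuming an sshd is actually running inside `master_node`, which the answer does not show): the legacy link injects the alias into the new container's /etc/hosts, so the target is reachable by name.

```shell
# start the target container first (image name from the question)
docker run -d --name master_node branislava/ubuntu1504java8:v1
# the alias master_node1 now resolves inside the linked container
docker run --rm -it --link master_node:master_node1 \
    branislava/ubuntu1504java8:v1 ssh root@master_node1
```

Note that legacy links only work between containers on the same host and the same default bridge, which is exactly the limitation the question is about.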
  • I run it with sudo docker run -e TERM --rm -it --entrypoint=//bin/bash branislava/ubuntu1504java8:v1 If I run both containers the way you wrote, how would I ssh between them? Using 192.168.0.* ? I am not sure what would happen if I had some other running containers on the same computer, too. Commented Nov 2, 2016 at 12:58
  • try it like this: docker run --name=CLIENT -it --rm --link=HOST
    – Eros
    Commented Nov 2, 2016 at 13:02
  • of course you must open port 22 on both
    – Eros
    Commented Nov 2, 2016 at 13:03
  • Thank you, I will try it out; I just have to figure out first what to state as CLIENT and what as HOST. It really did not come to my mind that it would be so simple (just the right option, --link). I expected something much more complicated. :) Commented Nov 2, 2016 at 13:37
  • Thank you for your answer; this works, but only when the containers run on the same host machine. I would like containers on different hosts, but in the same network, to communicate. Do you know how this could be managed? Commented Nov 3, 2016 at 8:57

I have never tried it, but this is how you can reach another PC:

docker run --name=Client -it --rm \
      --env='MEMCACHE_HOST=192.168.1.12' --env='MEMCACHE_PORT=22' \
  • It says it cannot find master_node :( I've run it with docker run --name master_node --rm -it branislava/ubuntu1504java8:v1 and docker run --rm -it --link master_node:master_node1 --name slave_node --env='MEMCACHE_HOST=192.168.0.12' --env='MEMCACHE_PORT=22' branislava/ubuntu1504java8:v1, where master_node is container running on 192.168.0.12 Commented Nov 4, 2016 at 7:17

I know this is an old post. If ssh is set up on the containers and port 22 is open, you need to find out the mapped port for external connections. This is not a complete solution, but you can try it out.

Run this command on laptop B

  docker port master_node1 22

If the port is published, the command prints a mapping; otherwise it errors out. For example, if the output of the above command is

    0.0.0.0:1234

then ssh from laptop A to the container on laptop B:

    ssh [email protected] -p 1234
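For `docker port` to report anything, the container on laptop B must have been started with port 22 published. A sketch under that assumption, reusing the image name from the question and the host port 1234 from the example output above:

```shell
# on laptop B: publish the container's sshd port to the host
docker run -d --name master_node1 -p 1234:22 branislava/ubuntu1504java8:v1
# verify the mapping; should print 0.0.0.0:1234
docker port master_node1 22
```

From laptop A you then ssh to laptop B's LAN address on port 1234, and the connection is forwarded into the container.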
