
I am planning to move from docker-compose to docker swarm for multiple nodes.

I have been using docker-compose like this:

version: '3'
services:
  python:
    container_name: python
    build: ./python
    command: uwsgi --socket :8001 --module myapp.wsgi --py-autoreload 1 --logto /tmp/mylog.log
    volumes: 
      - ./src:/code
      - ./src/static:/static
    ports:
      - "8082:8082"
    expose:
      - "8001"
  nginx:
    image: nginx:1.13
    container_name: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
      - ./static:/static
    depends_on:
      - python

It works well for hosting Django.

Now I am trying to use docker swarm.

docker-compose and docker swarm look alike, but I have one question.

I mounted local directories like this:

  - ./src:/code
  - ./src/static:/static 

  or here

  - ./nginx/conf:/etc/nginx/conf.d
  - ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
  - ./static:/static

However, in docker swarm, volumes like these don't work and the service never starts.

(I see... it is understandable, because there may be many nodes.)

I think I need to make some changes.

I googled around and found that there are settings like this:

services:
  python:
    volumes:
      - src:/code

volumes:
  src:
    driver: local

However, I don't understand how to use this.

Where should I put my source code and config files?

What is the best practice for files commonly shared in docker swarm?

1 Answer


If you are deploying an image across multiple nodes in swarm mode, the source code should be inside of your image, and that image should be pushed to a registry. Mounting source code as a volume is a method to speed up the development process, but the image being tested should be built with that source code included with a COPY command in your Dockerfile to be usable without the volume mounts.
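As a sketch, a Dockerfile for the python service above could bake the source in with COPY (the base image, `requirements.txt`, and paths are assumptions based on the compose file, not something from the question):

```dockerfile
# Hypothetical Dockerfile for the python service; base image and paths are assumptions.
FROM python:3.8-slim
WORKDIR /code
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Bake the source code into the image instead of mounting ./src as a volume.
COPY src/ /code/
CMD ["uwsgi", "--socket", ":8001", "--module", "myapp.wsgi"]
```

With the code inside the image, every node that pulls the image gets an identical copy, so no host-path mount is needed.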

Therefore the change to your compose file is to add image names pointing to your own repository, preferably with versioned tags. Then each image will have a Dockerfile and be built and pushed to the registry before deploying the stack.

Note that container_name, depends_on, and build are not valid in swarm mode. You'll need to remove any dependencies on a hardcoded container name (use the service name for DNS-based discovery). Dependencies should ideally be handled by the application with a connectivity test using some kind of exponential backoff up to a max limit, or by adding something like the wait-for-it.sh script to the entrypoint to verify dependencies are available before starting the application. Builds are typically moved out of the compose file and into a CI/CD system. The compose file then receives the current tag name as a variable from that CI/CD tooling.
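The exponential-backoff connectivity test mentioned above could look roughly like this in Python (the function name and limits are illustrative, not part of any library):

```python
import socket
import time

def wait_for(host, port, max_attempts=6, base_delay=1.0):
    """Retry a TCP connection with exponential backoff up to a max limit.

    Returns True once the dependency accepts a connection, False after
    max_attempts failures.
    """
    for attempt in range(max_attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            # Back off: 1s, 2s, 4s, ... capped at 30s between attempts.
            time.sleep(min(base_delay * 2 ** attempt, 30))
    return False
```

The application entrypoint would call something like `wait_for("db", 5432)` before starting uwsgi, using the swarm service name as the hostname.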

For persistent data, you would use a volume, but that volume should be on a network filesystem like NFS, rather than the default local filesystem, if you want the data to be persistent when containers migrate between nodes. An example of this is in my answer here.
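A named volume backed by NFS can be declared in the compose file itself; this is a sketch where the NFS server address and export path are placeholders:

```yaml
# Sketch: nfs.example.com and /exports/src are placeholders for your NFS server.
volumes:
  src:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw,nfsvers=4"
      device: ":/exports/src"
```

Because the data lives on the NFS server rather than a node's local disk, the volume has the same contents no matter which node the container lands on.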

For sharing data between containers in a swarm cluster, ideally that is done with network APIs (e.g. all the REST based microservices). You could also externalize that to a database that is either running outside of swarm or designed for a container environment (CNCF has some in their landscape). You could do this with the same NFS solution and a volume mount, but realize that you may be dealing with file locking and higher latency.

  • Thank you for your great explanation. Finally I think I understand the basic idea of docker swarm. I made the image including the conf files and source code, and it works well. However I have one question. In my case, almost all the scripts, such as source code and conf files, are the same and in the image, but the db server is outside and differs between production, staging (and development). For now it is hard-coded in django settings.py, like 'my-live-db' / 'my-dev-db'. Do I need to make two images just for this one difference?
    – whitebear
    Commented Mar 21, 2020 at 21:45
  • I also need to store uploaded and generated files. These should be kept permanently. How can I store these files?
    – whitebear
    Commented Mar 21, 2020 at 23:11
    Code/binaries/libraries/dependencies go in the image. Data goes in volumes. Configuration and secrets are injected as docker configs and secrets, single-file volume mounts in local environments (docker-compose), environment variables, or command line arguments.
    – BMitch
    Commented Mar 22, 2020 at 1:23
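Applied to the db-host question above, the environment-variable approach could look like this in settings.py (the variable names and defaults here are assumptions, not from the question):

```python
import os

# Hypothetical settings.py fragment: the database host is injected through an
# environment variable, so one image can serve production, staging, and dev.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("DB_HOST", "my-dev-db"),
        "NAME": os.environ.get("DB_NAME", "myapp"),
    }
}
```

Each stack's compose file would then set its own value, e.g. `environment: ["DB_HOST=my-live-db"]` in production, with the same image everywhere.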
