I have a small Docker swarm running in my office: one 40-core machine (128 GB RAM) and two 8-core machines (16 GB RAM each). When I deploy a service across the swarm, the tasks run, but they are spread evenly across the nodes without regard to per-machine capacity.
I started the swarm on the manager with:
docker swarm init
docker swarm update --task-history-limit 2
and on each node:
docker swarm join --token <token-string> <ipaddr:port>
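As a sanity check, `docker node ls` on the manager shows all three nodes joined:

# Run on the manager; all three nodes should be listed as Ready / Active.
docker node ls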
Then I start a service with:
docker service create --detach \
--mount type=bind,src=/s/mypath,dst=/home/mypath \
--entrypoint "/home/mypath/myscript.sh arg1 arg2" \
--name "mystuff" -w /home/mypath myregistry.me.com:5433/myimage
Each task works fine on its own. What I haven't found is any indication that Swarm weights assignments, or applies affinity, based on node strength.
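For what it's worth, the scheduler does record each node's capacity; I can see it with `docker node inspect`:

# Show the CPU and memory the scheduler records for each node.
# NanoCPUs is cores x 1e9, so the 40-core box reports 40000000000.
docker node ls -q | xargs docker node inspect \
  --format '{{.Description.Hostname}}: {{.Description.Resources.NanoCPUs}} nanoCPUs, {{.Description.Resources.MemoryBytes}} bytes'

It just doesn't seem to use that information when spreading tasks unless I tell it to.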
Ideally, I'd like to be able to say something like one of these:
- join the swarm, take no more than `n` tasks (a bit naïve)
- join the swarm, weight my (cpu-)capacity as `0.2` (or `5` on the larger ones)
- start this service, assign no more than one task per available core (the closest real option I've found is sketched below)
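The nearest built-in mechanism I've come across is per-task resource reservations, which make placement capacity-aware indirectly: if each task reserves one CPU, a node can only accept as many tasks as it has cores. A sketch of what I mean (the `--reserve-cpu` flag is real, but I haven't tried this exact combination on my swarm):

# Reserve one full CPU per task: the 40-core box could then hold up to
# 40 tasks while each 8-core box tops out at 8.
docker service create --detach \
--reserve-cpu 1 \
--mount type=bind,src=/s/mypath,dst=/home/mypath \
--entrypoint "/home/mypath/myscript.sh arg1 arg2" \
--name "mystuff" -w /home/mypath myregistry.me.com:5433/myimage

As I understand it, a reservation only affects placement (there's a separate --limit-cpu to cap actual usage), and tasks whose reservation can't be satisfied stay pending rather than overloading a node.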
I'm self-regulating the overall scale of the service with `docker service scale`, but that doesn't provide any per-node granularity. Is it possible to regulate Docker swarm services per node by available resources?
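If a fixed per-node cap were enough, newer engines (19.03+, if I'm reading the release notes right) also have --replicas-max-per-node, though it's a single number applied to every node rather than a per-node weight:

# Cap every node at 8 tasks; the limit is global, so it has to be
# sized to the smallest machines rather than weighted per node.
docker service create --detach \
--replicas 24 --replicas-max-per-node 8 \
--mount type=bind,src=/s/mypath,dst=/home/mypath \
--entrypoint "/home/mypath/myscript.sh arg1 arg2" \
--name "mystuff" -w /home/mypath myregistry.me.com:5433/myimage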
(This may be even more incentive to switch to k8s, which I assume provides functionality along these lines. I've been stiff-arming the growing pains of learning it and making the transition.)