Context

I hope the following diagram makes the setup clear; ask if anything still needs explaining.

(diagram: the hourly cronjob container and the worker containers both connect to a long-running Redis service)

Basically, the cronjob itself (a container) will run every hour and needs to use a long-running Redis service. (The Redis service acts as an interface to some other worker containers; the cronjob will not connect to them directly.) Everything runs inside the same Ubuntu instance with Docker set up.

Initial Idea

I have a docker-compose file that can fire up the workers and the Redis service. But can I put an hourly cronjob, running in a container (not as a long-running service), in the same file to take advantage of the docker-compose networking? In that case, I wouldn't have to expose 6379 to the localhost, am I correct?

Alternatively, if I handle the workers and the Redis service inside compose, can I make the cronjob container run inside the same network without exposing/publishing port 6379?

Any ideas and options available in Docker's networking/compose stack around this scenario would be appreciated.

I don't have a K8s cluster available, and setting one up isn't feasible, so the solution has to stay within the standalone Docker stack.

Further Clarifications

Based on @ajay's helpful answer, I decided to add some more clarifications on how I am envisioning the docker-compose file and the cron schedule.

services:
  redis: # This becomes the hostname
    image: "redis:alpine" # port 6379 of this runs the service. 

  workers:
    # The workers must not be directly visible to the cronjob. 
    # But each worker container must access redis:6379
    image: della/worker # From Dockerhub registry
    depends_on:
      - redis
    deploy:
      replicas: 5 # Scale out 

The cron schedule looks like this:

# The cronjob_container must be able to reach redis:6379
# Takes ~5 minutes to run. 
@hourly docker container start cronjob_container 

The cronjob_container cannot be part of the docker-compose file because it's not intended as a long-running service. But it needs to be part of the network defined by docker-compose.

However, if the compose file allowed a cronjob instead of a service, something akin to the Kubernetes CronJob resource, that would be the ideal solution.
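For what it's worth, the schedule above can be written so that cron launches a throwaway container attached to the compose network each hour. This is a sketch, not a confirmed setup: `myproject_default` is an assumed network name (compose names the default network after the project directory; check with `docker network ls`), and `della/cron-image` stands in for the real cronjob image.

```shell
# Crontab entry: every hour, run the job container on the compose network.
# --rm removes the exited container once run_manager.sh finishes
# (~5 minutes later), so no long-running service is left behind.
@hourly docker run --rm --network myproject_default della/cron-image
```

Inside that container, `redis:6379` resolves to the Redis service exactly as it does for the workers.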

  • If you can't use k8s on a single node, how about dokku? Can you clarify why k8s on the node is not practical?
    – chicks
    Commented Jul 9 at 20:22
  • 1. I have an EC2 instance, and the workload is not high enough to warrant K8s, although I can fire up a minikube cluster (the only way I have used K8s before). 2. The docker images rest in Google Artifact Registry. Fetching them in the minikube pods? I am not sure how to do it (although there must be ways). I am not a DevOps engineer but an ML engineer, so I am trying to find my way around the situation. The options are based on tools I understand and am confident with, rather than what is objectively the best.
    – Della
    Commented Jul 10 at 2:26
  • dokku.com is easier to get started with than k8s. K8s requires an ongoing investment in maintenance and updates so not choosing to have that much complexity is completely reasonable.
    – chicks
    Commented Jul 10 at 2:29

1 Answer


You can connect to your Redis container's port without exposing it or going through localhost, as long as both containers are on the same network. You just have to use the container name along with the port number as the URL in your cronjob container, and you will be able to achieve this.

Since you haven't provided any details of your docker-compose file, I will give you an example:

version: '3.8'

services:
  redis:
    image: my-redis-image
    container_name: myredis
    expose:
      - "6379"

  cronjob:
    image: my-cronjob-image
    container_name: mycronjob
    environment:
      - URL=redis://myredis:6379 # Redis speaks its own protocol, not HTTP

networks:
  default:
    driver: bridge

I have passed the container URL through as an environment variable in this docker-compose file. You can change that and provide it in your Dockerfile or bake it into your image. Since the containers are on the same network, you can connect to the Redis container without mapping its port to the localhost.
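As a sketch of how the cronjob container might consume that variable (the `redis://myredis:6379` default and the `run_manager.sh` reference come from this question; nothing here is a confirmed part of the real image):

```shell
#!/bin/sh
# Entrypoint sketch for the cronjob image: take the Redis URL from the
# environment, falling back to the compose service name on the default port.
URL="${URL:-redis://myredis:6379}"
echo "connecting to ${URL}"

# run_manager.sh would read $URL and talk to Redis from here, e.g. via
#   redis-cli -u "$URL" ping
# (assuming redis-cli is installed in the image).
```

The fallback means the script still works when the compose file's `environment:` block is absent, e.g. when the container is started by hand for debugging.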


Edit:

If you cannot include the cronjob container in the docker-compose file, then you can run the container on the same network used by the Redis container and use the URL redis://myredis:6379. Check that the containers are on the same network using this command:

docker container inspect --format '{{range $net,$v := .NetworkSettings.Networks}}{{printf "%s\n" $net}}{{end}}' <container_name>

Replace container_name with your Redis container's name and then with the cronjob container's name. If they are not on the same network, run your cronjob using this command:

docker run -d --name mycronjob --network <network_name> my-cronjob-image

Replace the network_name with the Redis container's network.
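If the cronjob container already exists (as in the `docker container start cronjob_container` schedule in the question), another option is to attach it to the network once with `docker network connect`; after that, restarting it is enough. This is a sketch with assumed names: `myproject_default` for the compose network and `cronjob_container` from the question.

```shell
# One-time: attach the existing container (running or stopped) to the
# compose network, so myredis:6379 resolves inside it on every start.
docker network connect myproject_default cronjob_container

# Thereafter the hourly schedule stays as it was:
# @hourly docker container start cronjob_container
```

This avoids recreating the container every hour, at the cost of keeping its filesystem state between runs.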

  • Thanks a lot, I will test it; it seems exposing port 6379 may solve the problem. But the cronjob cannot be part of the compose file, as it is not a long-running service. The cronjob container is intended to start every hour, execute its CMD ["./run_manager.sh"] (the last line of the Dockerfile), and terminate, taking about ~5 minutes. So basically, a container outside the compose file must be able to reach myredis:6379. Also, what do the last three lines (the network options) mean?
    – Della
    Commented Jul 8 at 7:28
  • I added some more details in the question as well.
    – Della
    Commented Jul 8 at 7:39
  • @Della The three lines define a network that will be used by the containers. If you don't provide that, docker-compose will create a default one for you
    – Ajay
    Commented Jul 8 at 10:55
  • @Della I have edited my answer. Let me know what works for you.
    – Ajay
    Commented Jul 8 at 11:11
