This question is related to my previous one - but a bit more descriptive. I am trying to automate the generation and deployment of a Docker container. The Docker container is created with Packer. After the creation of the Docker image, I use Packer's post-processing modules to upload it to AWS ECR:

{
    "type": "docker-push",
    "ecr_login": true,
    "aws_access_key": "<snip>",
    "aws_secret_key": "<snipe>",
    "login_server": "https://<snip>.dkr.ecr.eu-west-1.amazonaws.com/"
}

This works just fine. The Docker image gets uploaded to my repository in ECR. Now, the next step is where I am stuck: I want to start the container on AWS.

I am thinking a little script that uses aws-cli commands could be suitable for that. I do not want to use the web UI - and I am sure this can be done over the command line, especially when someone wants to integrate this into a pipeline of some sort.

However, I have a few questions, as I am not sure if I understand the big picture correctly. My Docker image is available in ECR. To run it, I need some base instance, right? According to a reply in the previous thread, there are various ways of running Docker containers, either by using ECS or by spinning up an EC2 instance. Since this is related to a research project where Docker and virtualization are analyzed, I am wondering what would be the best option for my case. Probably EC2? It's important that I can automate it so that no user interaction is required.

I read the docs and checked several tutorials, but I am still not entirely sure how to do everything correctly in the right order - especially the part where the environment is set up and the image is started.

I know that I can list my images with:

aws ecr list-images --repository-name prototypes

How would I fire up a new instance and run my image on it?

If anything is unclear, please let me know so that I can update my question to be as detailed as possible.

  • I'm interested in doing so as well, i.e. deploying Docker containers on AWS. If this is a research project with example deployments anyway, maybe there is some GitHub write-up so that a direct code contribution would be possible? Are you going to deploy a bunch of containers, i.e. after deployment, what is the orchestration strategy? Maybe what you need is EKS?
    – Ta Mu
    Commented Dec 28, 2019 at 12:11
  • @Peter I probably cannot publish the code on GitHub, because the research project is under NDA. However, I can maybe update this thread when someone provides an answer that helps me achieve the scenario described above. I still haven't managed to solve the problem, and I hope that someone who is experienced with AWS can give a descriptive explanation.
    – Kyu96
    Commented Jan 2, 2020 at 0:07

1 Answer

I will give it a try and explain all required steps with examples.

Please add comments with questions and improvement suggestions if anything is not well enough explained.

The OP (original poster) refers to awscli; while I provide corresponding examples, I also discuss the limitations of this approach and give examples of doing the same with Python.

EC2 deployment

The purpose of this answer is to demonstrate that, as the OP correctly assumed, neither Web UI access nor human interaction is required to deploy a container on an AWS EC2 instance.

Scope and limitations

In this use case, we want to start just one container manually for demo/research purposes, therefore we explicitly decide not to use any orchestration.

For a complex and possibly more realistic deployment scenario, this example is therefore not reusable. For the same reasons of simplicity, we will use the EC2 service and create a single virtual machine there.

According to a brief look at the documentation, there seems to be no official AMI with a pre-installed Docker daemon. Thus we need to deploy a default EC2 instance, install the Docker daemon there, and then start the daemon.

When using a private registry, a login is required; for simplicity, I also give an example of running a container from the public Docker Hub registry.

Also, I am not going to provide a copy-paste automation script, but single steps which can easily be adapted to individual needs. In any case, the intermediate results require parsing and passing operations, which is then purely a programming concern.

A boilerplate automation could be done either with bash using awscli or with Python, using the boto3 SDK for AWS.

Python is worth considering because you can do better processing of the JSON responses you get from AWS. You can also use Python to execute SSH logins and Bash commands on the remote host.

Execution plan

  • A. Identify recent Linux AMI (Amazon Machine Image) ID
  • B. Create keypair and instance (further, also referenced as VM)
  • C. Connect to the instance
  • D. Install and configure Docker daemon
  • E. Run container

A. Identify recent Linux AMI image ID

Search for a recent and up-to-date Amazon Linux image following the official documentation:

$ aws ec2 describe-images --owners amazon --filters 'Name=name,Values=amzn2-ami-hvm-2.0.????????.?-x86_64-gp2' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text
ami-01f14919ba412de34
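
If the automation is written in Python, the same lookup can be done with boto3. Here is a minimal sketch reusing the filter values from the awscli command above; the region is assumed to be eu-west-1, taken from the OP's ECR URL:

import boto3

ec2_client = boto3.client('ec2', region_name='eu-west-1')

# same filters as in the awscli call above
response = ec2_client.describe_images(
    Owners=['amazon'],
    Filters=[
        {'Name': 'name', 'Values': ['amzn2-ami-hvm-2.0.????????.?-x86_64-gp2']},
        {'Name': 'state', 'Values': ['available']},
    ],
)

# pick the most recently created image
images = sorted(response['Images'], key=lambda i: i['CreationDate'], reverse=True)
AMI_ID = images[0]['ImageId']
print(AMI_ID)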

B. Create keypair and VM

With awscli, you would just say:

$ aws ec2 create-key-pair --key-name ec2-docker-test

However, the API sends you the private key material wrapped in a JSON response. Therefore, using Python it is much easier to process this response into a .pem file:

import boto3

ec2 = boto3.resource('ec2')

# call the boto3 EC2 function to create a key pair
key_pair = ec2.create_key_pair(KeyName='ec2-docker-test')

# capture the private key material and store it in a local .pem file
key_pair_out = str(key_pair.key_material)
print(key_pair_out)

with open('ec2-keypair.pem', 'w') as outfile:
    outfile.write(key_pair_out)
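
ssh later complains about key files that are readable by other users, so it is worth restricting the file permissions right away; a small addition to the script above:

import os

# the key must only be readable by its owner, otherwise ssh rejects it
os.chmod('ec2-keypair.pem', 0o400)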

With the AMI ID we obtained previously, we can now create the VM:

instances = ec2.create_instances(
    ImageId=AMI_ID,
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='ec2-docker-test'
)

The AWS API also allows looking up the instance state and IP. With a little more Python/boto3 code you can acquire this data to know when you can proceed, as shown in the sketch below.
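
A minimal sketch of such a wait-and-lookup, continuing with the instances list returned by create_instances above:

# wait until the first (and only) instance reaches the 'running' state
instance = instances[0]
instance.wait_until_running()

# refresh the cached attributes and read the public IP
instance.reload()
print(instance.state['Name'], instance.public_ip_address)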

Note: Firewall consideration

To configure firewall access, please create a security group in the machine's VPC and attach it to the instance. For readability reasons, only a minimal sketch is shown below.
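
A minimal boto3 sketch that opens SSH (port 22) to the world; the group name is hypothetical and the default VPC is assumed:

ec2_client = boto3.client('ec2', region_name='eu-west-1')

# create a security group in the default VPC (hypothetical name)
sg = ec2_client.create_security_group(
    GroupName='ec2-docker-test-sg',
    Description='SSH access for the docker test instance'
)

# allow inbound SSH from anywhere; restrict the CIDR for real setups
ec2_client.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpProtocol='tcp',
    FromPort=22,
    ToPort=22,
    CidrIp='0.0.0.0/0'
)

The resulting GroupId can then be passed to the create_instances call above via the SecurityGroupIds parameter.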

C. Test SSH connection

As soon as the instance is up and running, you can do everything you like by means of remote command execution.

In a Bash script, you would either run the commands directly over ssh, or use scp to upload a local Bash script and run it remotely.

 ssh -i <KEY_FILE> -t ec2-user@<REMOTE_IP> <COMMAND>

With Python, you can use additional modules like paramiko to establish an SSH connection and run commands remotely.

import paramiko

client = paramiko.SSHClient()
# accept the (yet unknown) host key of the freshly created instance
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect(ip, username=username, key_filename=key_filename, port=port,
               timeout=timeout, auth_timeout=timeout)

stdin, stdout, stderr = client.exec_command("whoami")

For readability purposes, I list the further commands unwrapped. In your script, you might want to use elementary helper subroutines to have calls like ssh_call(command).
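
Such a helper could look roughly like this; it is a sketch built on the paramiko client from above, and the function name ssh_call is, as in the text, just an example:

def ssh_call(command):
    """Run a single command on the remote host and return exit code and output."""
    stdin, stdout, stderr = client.exec_command(command)
    exit_code = stdout.channel.recv_exit_status()  # blocks until the command finishes
    return exit_code, stdout.read().decode(), stderr.read().decode()

# usage
code, out, err = ssh_call("docker --version")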

D. Install and configure Docker daemon

As said above, anything else from here is running remote commands.

That is, the concluding steps below follow an example from the official documentation; the procedure is the same as you would do locally, but you need to wrap it in your script and pass intermediate results along where required:

  • install and configure Docker daemon
  • login to your Docker registry if needed
  • pull image
  • run Docker container
    sudo yum update -y
    sudo amazon-linux-extras install docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
    # relogin or continue with sudo, which you shouldn't
    aws ecr get-login --no-include-email --region region
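    # note: 'get-login' prints a 'docker login ...' command that must then be executed on the instance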


E. Run container

Pull the custom image and run a container from the private registry in AWS:

    docker pull AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPOSITORY:TAG
    docker run --name MYSERVICENAME -d -p PORT_HOST:PORT_CONTAINER AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPOSITORY:TAG

Pull an official public image and run another container listening on port 8001 in a single step:

    docker run --name nginx -d -p 8001:80 nginx:latest


Alternative scenario with ECS

As described in the Python/boto3 examples here and here, it's possible to define a machine cluster with the AWS ECS service and run containers there.

Please note that an ECS cluster offers you an abstraction over either AWS EC2 or AWS Fargate to deliver the actual virtual machines.

If we go for EC2, we again need to pick a recent machine image, but this time we need an image optimized for ECS. In a nutshell, these machines come with an ECS agent; through its configuration, we assign a machine to our target cluster.
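
A recommended ECS-optimized AMI can be looked up via the public SSM parameter store; a minimal sketch (parameter path valid for Amazon Linux 2 at the time of writing):

ssm_client = boto3.client('ssm', region_name='eu-west-1')

# resolve the currently recommended ECS-optimized Amazon Linux 2 AMI
param = ssm_client.get_parameter(
    Name='/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id'
)
AMI_ID = param['Parameter']['Value']
print(AMI_ID)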

A small bonus is also that ECS machines have Docker daemon already preinstalled.

So, we create an ECS cluster and add a virtual machine to it.

ecs_client = boto3.client('ecs')
ec2_client = boto3.client('ec2')

ecs_client.create_cluster(clusterName=cluster_name)

ec2_client.run_instances(
    ImageId=AMI_ID,
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    IamInstanceProfile={
        "Name": "ecsInstanceRole"
    },
    # the ECS agent reads this file to join the instance to our cluster
    UserData="#!/bin/bash \n echo ECS_CLUSTER=" + cluster_name + " >> /etc/ecs/ecs.config"
)
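
To verify that the instance has actually registered with the cluster before starting any tasks, the container instances can be listed; a sketch, reusing the ecs_client and cluster_name from above (registration can take a little while after boot):

# the instance shows up here once its ECS agent has joined the cluster
response = ecs_client.list_container_instances(cluster=cluster_name)
print(response['containerInstanceArns'])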

In the cluster, we can create a task definition and a service, which are both abstractions over the container concept.

ecs_client.register_task_definition(
        containerDefinitions=[
        {
          "name": "<MY_SERVICE>",
          "image": "<MY_IMAGE>",
          "portMappings": [
            {
              "containerPort": 80,
              "hostPort": 80
            }
          ]
        }
        ],
        family="hello_world"
    )

Now, you may ask yourself: how would the Docker daemon on the instance know about your private registry in order to pull the specified image? For images hosted in ECR, this works through the instance role assigned above; for other private registries, the machine's ECS agent needs additional configuration.
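
A sketch of what granting that pull permission could look like with boto3; it assumes the ecsInstanceRole referenced in run_instances above already exists and simply attaches the AWS-managed policy for ECS container instances, which includes the ECR pull permissions:

iam_client = boto3.client('iam')

# attach the managed policy for ECS container instances to the instance role
iam_client.attach_role_policy(
    RoleName='ecsInstanceRole',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role'
)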

Now, you can launch the service, which will use the previously defined task to tell the previously authorized Docker daemon to pull the previously built and published image from ECR and run the container on the machine you've included in the ECS cluster in the data center house that Jeff built.

ecs_client.create_service(
        cluster=cluster_name,
        serviceName=service_name,
        taskDefinition=task_name,
        desiredCount=1,
        clientToken='request_identifier_string',
        deploymentConfiguration={
            'maximumPercent': 200,
            'minimumHealthyPercent': 50
        }
    )
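
To check whether the deployment actually converged, the service and its tasks can be queried; a sketch using the cluster_name and service_name from above:

# runningCount should reach desiredCount once the container is up
service = ecs_client.describe_services(
    cluster=cluster_name,
    services=[service_name]
)['services'][0]
print(service['desiredCount'], service['runningCount'])

# the task ARNs of the started containers
print(ecs_client.list_tasks(cluster=cluster_name, serviceName=service_name)['taskArns'])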

Configuration of security groups in the ECS context has been covered on Stack Overflow already. In short, these are further calls to the parts of the API related to security group management.

Conclusion

Both Bash and Python implementation options have their benefits and drawbacks.

In the long term, Python, being a general-purpose programming language with a large community, allows for complex string manipulation and for building abstractions over a cloud API or a system process.

Without knowing the real goal of the deployment, an end-to-end automation of the above steps might not be worth the effort: for deploying a set of related containers, which is the more typical scenario, other approaches could be more pragmatic.

Before stepping into technical implementation, it is best to make well-elaborated decisions and choices addressing business goals and budget:

  • how many times will the automation need to run, and over which time period, vs. the time to implement it end-to-end from the beginning?
  • assess whether you need only EC2, or ECS with either EC2 or Fargate for stateless tasks, or maybe you need Kubernetes?

Further reading: "The Benefits of Managed Kubernetes vs. Amazon ECS" (short argument from a cloud consultant blog as of August 2019).

A 2018 CNCF survey reports that 83% of organizations use Kubernetes as their container orchestration solution vs. 24% for ECS.

  • With boto3, you can wire up an ECS client object (where you define clusters and services) with an ECR client (where you get the repository deployment token). I've found a small and nice GitHub example on this; please tell me if you want me to add an answer section along these lines. github.com/AlexIoannides/py-docker-aws-example-project/blob/…
    – Ta Mu
    Commented Jan 4, 2020 at 20:40
  • Will do so in a couple of hours.
    – Ta Mu
    Commented Jan 5, 2020 at 7:32
  • Done. Please review.
    – Ta Mu
    Commented Jan 6, 2020 at 14:28
  • Yes, you need to provide the ECS instance a role with which it can connect to ECR. As a quick shot: please check this docs.aws.amazon.com/AmazonECS/latest/developerguide/… and compare to the boto3 iam specs, and tell me whether it helps.
    – Ta Mu
    Commented Jan 6, 2020 at 21:02
  • Possibly you need to step into some debugging: SSH to the EC2 instance, check the container logs, open a terminal to the container and validate what is happening. Maybe post a new question :-)
    – Ta Mu
    Commented Jan 7, 2020 at 6:37
