Docker CLI Integration with Amazon ECS

For a number of years now, we have been using docker-compose to define our multi-container Docker applications, almost always in the context of local development or automated testing as part of our continuous integration and deployment pipelines. Perhaps we have used docker-compose in proof-of-concepts and small non-critical applications hosted on a single server, but generally that is where docker-compose ended its involvement in the development lifecycle: an extremely useful tool for local development and testing. When running our containers in upstream environments, we look to orchestration engines and managed cloud services such as Kubernetes, Amazon ECS and Amazon EKS.

Built-in Integration with Amazon ECS

Recently, Docker announced built-in integration with Amazon ECS. Now, using native Docker Compose CLI commands, we can deploy our multi-container applications to Amazon ECS. What is neat about this integration is that the Docker tooling provisions the ECS environment for us (more on that below) and makes it (almost) seamless using a new `context` command line option. Using `context` we can switch between local and our ECS environment(s) and still use our docker-compose definitions and constructs.

Does this mean we should use this in production?

Well, hold on there 🤠!

While this does extend the use of docker-compose beyond the local development environment (or CI/CD pipeline), it is not meant to replace production-grade container orchestration engines like K8s, EKS or ECS. What it does excel at is allowing us to extend our developer toolchain into the cloud and accelerate the testing of our integration with other AWS services earlier in the development lifecycle.

Ok. So how does all this work?

To demonstrate this, I went and grabbed a typical Docker Compose 3-tiered demo app from Docker’s GitHub repository and made a few tweaks to make the demo easier. You can grab the demo solution from my GitHub repository if you want to follow along at home (https://github.com/ScottScovell/docker-ecs-demo/tree/feature/part-1).

The demo solution consists of a React.js frontend with a Node.js backend API and MySQL database.

version: "3.7"

services:
  backend:
    build:
      args:
        - NODE_ENV=development
      context: backend
    command: npm run start-watch
    environment:
      - DATABASE_DB=example
      - DATABASE_USER=root
      - DATABASE_PASSWORD=/run/secrets/db-password
      - DATABASE_HOST=db
      - NODE_ENV=development
      - PORT=80
    ports:
      - 80:80
    secrets:
      - db-password
    volumes:
      - ./backend/src:/code/src:ro
      - ./backend/package.json:/code/package.json
      - ./backend/package-lock.json:/code/package-lock.json
      - back-notused:/opt/app/node_modules
    networks:
      - public
      - private
    depends_on:
      - db

  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - private
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password

  frontend:
    build:
      context: frontend
      target: development
    ports:
      - 3000:3000
    volumes:
      - ./frontend/src:/code/src
      - /code/node_modules
    networks:
      - public
    depends_on:
      - backend

networks:
  public:
  private:

volumes:
  back-notused:
  db-data:

secrets:
  db-password:
    file: db/password.txt

For the uninitiated, the power of docker-compose comes from the ability to spin "up" even a moderately complex application stack like this on your local machine in seconds and start developing. Let's do that now and "up" our local development environment.

docker-compose --context default -f docker-compose.local.yaml up --build -d

You may notice two things about the command above. First, the use of the `context` parameter. This new parameter works much like profiles in the AWS CLI, if you are accustomed to those. Here we are explicitly setting the Docker CLI context to the default (i.e. local) context. Second, we are explicitly specifying the Docker Compose YAML file to use rather than the default `docker-compose.yaml`. More on that below.
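To illustrate the parallel with AWS CLI profiles, a context can be supplied per command or set as the default for everything that follows:

# Pass the context for a single command (akin to `aws --profile <name> ...`)
docker --context default ps

# Or make it the default for subsequent commands
docker context use default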

The first time through, you'll likely need to pull the base Docker images, so it may take a while (after that it only takes a few seconds). Our local docker-compose file tells the Docker engine to build each of our images if they do not already exist in the local cache. When complete, you should see something like this:

Successfully tagged docker-ecs-demo_frontend:latest
Creating docker-ecs-demo_db_1       ... done
Creating docker-ecs-demo_backend_1  ... done
Creating docker-ecs-demo_frontend_1 ... done

From a browser, you should also be able to navigate to the React frontend at localhost:3000 and invoke the backend API service on port 80.

 

Just as easily, we can tear the environment down using the Docker toolchain:

docker-compose --context default -f docker-compose.local.yaml down

Stopping docker-ecs-demo_frontend_1 ... done
Stopping docker-ecs-demo_backend_1  ... done
Stopping docker-ecs-demo_db_1       ... done
Removing docker-ecs-demo_frontend_1 ... done
Removing docker-ecs-demo_backend_1  ... done
Removing docker-ecs-demo_db_1       ... done
Removing network docker-ecs-demo_private
Removing network docker-ecs-demo_public

Note: I have pulled a number of related commands together into targets of a Makefile just to make the demo easier. This lets us be explicit about which Docker context and corresponding Docker Compose YAML file to use without typing the lengthy commands.

make dev-up

make dev-down
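For reference, these dev targets are thin wrappers around the commands above. A minimal sketch of what they might look like (see the Makefile in the demo repo for the actual definitions; make requires recipe lines to be indented with tabs):

dev-up:
	docker-compose --context default -f docker-compose.local.yaml up --build -d

dev-down:
	docker-compose --context default -f docker-compose.local.yaml down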

Using Docker to Integrate with Amazon ECS

Now that we have the local development environment running fine, let us look into using Docker to integrate with our AWS Account and “up” the same environment in Amazon ECS.

Prerequisites:

  • It is assumed you have the AWS CLI installed and configured. The profile being used must have permissions to create IAM, VPC, EC2, ECS and other resources (see below). This may be an issue if you are using your organisation's AWS account with restrictive permissions.

Create a New Context

The first step is to create a new `context` for the Docker toolchain to use. Docker comes with a `default` context of type `moby`, which is the Docker engine running on your local development machine. You manage your Docker contexts using the new `docker context` command. Let's list the existing contexts and note the active context in use (denoted by an asterisk):

 

docker context ls

You should see something like the following; note the currently selected context:
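Illustrative output only; the exact columns vary between Docker versions:

NAME        TYPE   DESCRIPTION                               DOCKER ENDPOINT               ORCHESTRATOR
default *   moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   swarm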

Create a new context of type `ecs`:

docker context create ecs aws

In the line above, I have created a new ECS context named `aws`. The Docker CLI will prompt you for an AWS CLI profile and region (you can also create a new profile from the prompt). You can name this context whatever you like; I used `aws` here and refer to it in the Makefile, so if you follow my demo code and name your context something else, make sure you change it there. While you are at it, ensure the AWS account and region match the AWS profile you are using.

If we list docker context again we see the following:
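Again illustrative, but you should now see the new context of type `ecs` alongside the default (its description reflects the profile and region you configured):

NAME        TYPE   DESCRIPTION                               DOCKER ENDPOINT
default *   moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
aws         ecs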

At this stage we have only set up the context; we have not deployed any resources into AWS. Only when we "up" our environment using Docker Compose do our resources get provisioned. Likewise, when we "down" our environment, the Docker toolchain will clean up those resources so we are no longer charged.

Prepare Docker Compose file

Before we "up" the environment in AWS, we need to make some changes to our Docker Compose file to align it with our target context.

version: "3.7"

services:
  backend:
    image: <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-backend:latest
    command: npm run start-watch
    environment:
      - DATABASE_DB=example
      - DATABASE_USER=root
      - DATABASE_PASSWORD=/run/secrets/db-password
      - DATABASE_HOST=db
      - PORT=80
    ports:
      - 80:80
    secrets:
      - db-password
    networks:
      - public
      - private
    depends_on:
      - db

  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - private
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password

  frontend:
    image: <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-frontend:latest
    ports:
      - 3000:3000
    networks:
      - public
    depends_on:
      - backend

networks:
  public:
  private:

volumes:
  db-data:

secrets:
  db-password:
    file: db/password.txt

Comparing the above with our local Docker Compose YAML file, we have made the following changes:

  • Build: ECS can't build our images the way the Docker engine does locally, so we need to reference already built and tagged container images residing in a container registry like Docker Hub or, in our case, Amazon Elastic Container Registry (ECR). We replace the build definitions with references to images we will push as part of our workflow.
  • Volumes: During local development, we used volumes to share source code between our local IDE and the application code residing in the container. In ECS this is no longer required, so we can remove them for our frontend and backend services. The database service (MySQL) still needs a persistent volume so we don't lose data between restarts of our container.

To preview the AWS resources that Docker will create for us, we run the following Docker command:

docker --context aws compose convert -f docker-compose.ecs.yaml
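The convert command writes a CloudFormation template to stdout. Below is a heavily trimmed, illustrative excerpt; the logical resource names are generated by the tooling, so treat them as placeholders:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  Cluster:                    # generated logical names will differ
    Type: AWS::ECS::Cluster
  CloudMap:
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  BackendService:
    Type: AWS::ECS::Service
  BackendTaskDefinition:
    Type: AWS::ECS::TaskDefinition
  DbdataFilesystem:
    Type: AWS::EFS::FileSystem
  DbpasswordSecret:
    Type: AWS::SecretsManager::Secret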

 

Let's take a closer look at the resources created and how they relate back to our Docker Compose file.

| AWS Resource | Docker Compose | Description |
| --- | --- | --- |
| ECS Cluster, CloudMap Namespace, Load Balancer | (common) | ECS cluster provisioned to run our multi-container application stack; CloudMap namespace to manage the service discovery endpoints for each container service; load balancer for our exposed service endpoints |
| ECS Service, ECS Fargate Task Definition, IAM Execution Role, Load Balancer Listeners, Load Balancer Target Groups, CloudMap Service Entry, Security Group Ingress Rules | `services:` | ECS service and task definitions for each container service (frontend React app, backend API, MySQL database); sidecar containers for secrets and service discovery; load balancer entries for ingress ports; service discovery entries for managing inter-service communication (frontend to backend, backend to database) |
| | `ports:` | Security group ingress rule for each exposed port defined for the service |
| | `secrets:` | (see `secrets:` below) |
| | `networks:` | Security group allowing traffic between configured networks (see `networks:` below) |
| Security Group | `networks:` | Public and private network security groups |
| Elastic File System, Mount Targets | `volumes:` | EFS file system for persisting and sharing MySQL database files across multiple instances of our database service |
| Secrets Manager Secret | `secrets:` | Secrets Manager secret for our MySQL database password |

Note: Docker will not create the ECR repositories referenced in your Docker Compose file. We need to create these ourselves and push the referenced images into them so they are available to ECS.

# Create ECR repositories
aws ecr create-repository --repository-name docker-ecs-demo-frontend
aws ecr create-repository --repository-name docker-ecs-demo-backend

# Login into ECR
aws ecr get-login-password --region <your aws region> | docker login --username AWS --password-stdin <your aws account>.dkr.ecr.<your aws region>.amazonaws.com

# Tag latest images with ECR repository name
docker --context default tag docker-ecs-demo_frontend:latest <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-frontend:latest
docker --context default tag docker-ecs-demo_backend:latest <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-backend:latest

# Push images to ECR
docker --context default push <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-frontend:latest
docker --context default push <your aws account>.dkr.ecr.<your aws region>.amazonaws.com/docker-ecs-demo-backend:latest

The first time we run this, it will take some time to push each layer of our docker images into ECR. Subsequent changes to our images will only require those modified layers to be pushed.

"up" the multi-container application

Now let's "up" our multi-container application using our ECS Docker Compose file. We can either switch our current context to the one we created above (of type ecs), or just specify which context we want the command to apply to. I prefer the latter as a safe practice, to ensure I don't inadvertently spin up or send changes to the wrong context. If we wanted to switch context, we'd use `docker context use aws`, setting that context as the default.

docker --context aws compose -f docker-compose.ecs.yaml up
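For completeness, the context-switching flavour of the same deployment would look like this:

docker context use aws
docker compose -f docker-compose.ecs.yaml up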

Docker will generate the CloudFormation template we previewed above and apply it to the target environment defined by the AWS profile we configured against the context. Depending on the complexity of your application stack, this may take 5-10 minutes. We will see something like the following:
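The output below is abbreviated and illustrative; resource names and timings will differ in your run:

[+] Running 6/6
 ⠿ docker-ecs-demo   CreateComplete
 ⠿ Cluster           CreateComplete
 ⠿ LoadBalancer      CreateComplete
 ⠿ BackendService    CreateComplete
 ⠿ DbService         CreateComplete
 ⠿ FrontendService   CreateComplete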

Note: Again, I've bundled the above workflow into a handy make target, so we always remember to push the latest Docker images into ECR before spinning up our ECS environment. Review the Makefile first and adjust it to target your AWS environment.

make ecs-up

When that completes, we should be able to navigate to our load balancer endpoint and verify our application stack is running in AWS. Find the DNS name of the load balancer for the stack we just created:

aws elbv2 describe-load-balancers | grep 'LoadB'
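Alternatively, a more targeted query using the AWS CLI's built-in JMESPath support returns just the DNS names:

aws elbv2 describe-load-balancers --query 'LoadBalancers[*].DNSName' --output text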

Open a browser and navigate to the DNS host name on port 3000. We should see our React application running in AWS. Note that DNS may take a few minutes to propagate after our AWS resources have been created, so wait a little and retry if you have issues the first time.

Change the application and re-deploy

Now let us make a change to our application and re-deploy. We will make a simple change to the message being displayed from the backend. Currently it queries for and displays the name of the MySQL database used. Let’s make a small change to this text in the `server.js` file in our backend API.

Now that we have made our change, let’s test it locally using our local docker compose file and default docker context.

make dev-up

Happy with our local change, let’s push it to ECS using our ECS docker compose file and aws docker context

make ecs-up

In the console, we notice that our backend Docker image has a new layer (our change) pushed to ECR, and that a change to our ECS environment has been triggered. A new ECS task definition revision is created, informing ECS that a new container image is available and that instances of our backend service should be replaced with the new version.
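To confirm the rollout from the command line, the standard ECS listing calls below can help; the cluster and task definition family names are generated by the tooling, so list rather than guess:

# Clusters created by the tooling, and newest task definition revisions first
aws ecs list-clusters
aws ecs list-task-definitions --sort DESC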

Refreshing (or navigating to our AWS hosted application) should show the change in our ECS environment.

Clean up

To end, let's clean up our resources so we are not running up costs while the environment sits idle. As mentioned before, we can do this the same way we do locally, using the Docker command line to tear down our environment.

docker --context aws compose -f docker-compose.ecs.yaml down

Note: If you are following along with the demo project, you can use `make clean` after the above to remove the ECR repositories created as part of this walkthrough.

Also note that by default, EFS volumes are retained so you will need to clean up those resources separately if you are not intending to share them with other members of the development team.
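If you do want to remove a retained EFS file system, the standard AWS CLI calls below will find and delete it; the file system ID is a placeholder, and any remaining mount targets must be deleted first:

# Find the file system created for the db-data volume
aws efs describe-file-systems --query 'FileSystems[*].[FileSystemId,Name]' --output text

# Delete it once its mount targets are gone
aws efs delete-file-system --file-system-id <file system id>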

Summary

In this post we have seen how we can apply the familiar Docker toolchain commands and workflows used for local container development to integrate with Amazon ECS. While not "seamless" by any definition of the word, integration with ECS is easy enough to configure and incorporate into development workflows. Having said that, here are a few observations and suggestions on how the experience might be improved:

  • It would be good not to have to maintain context-specific docker-compose files, i.e. one for local dev and one for ECS. Perhaps context-specific attributes, ignored by contexts that don't support or need them, might be an approach here.
  • Add and display CloudFormation template outputs after we "up" our environment, so we don't need to go hunting for load balancer endpoints.
  • I ran into some issues defining different host and container ports when using the ECS context type (e.g. exposing port 80:3000).

 

In future posts, we'll explore further integration features, look at how we can share and re-use existing resources in our AWS account, and see how we might swap out containers used only for local mocking with cloud-native services more representative of upstream production environments.
