Migrating your application to the cloud can be hard, especially when you are not sure which approach or resources to use. If your application runs on Docker containers, it is no different. Migrating container applications to AWS can be achieved through several approaches, one of which is Amazon Elastic Container Service (ECS). ECS is a highly scalable container orchestration service that supports Docker containers. There are two launch types for running your containers: EC2 instances and AWS Fargate.
In this post, I will guide you through migrating a container application to AWS Fargate and automating the deployment process using Terraform, an infrastructure as code (IaC) tool. The application details are not important at this point, but for the sake of this example, let's assume we are hosting a Java backend API application on Docker containers.
For better understanding, I have separated the post into two parts: Infra and CICD. The first part covers all the steps necessary to deploy your application, and the second covers how to automate its deployment. By the end of this series, you will be able to deploy the project all together or module by module. So bear with me, and I hope you enjoy the ride.
Part 1 - Infrastructure
In this part, we will cover the creation of the infrastructure necessary to deploy containers on Fargate, ensuring high availability, scalability, reliability, and other best practices with resources such as a VPC, ALB, security groups, and ECS tasks. But before we start, let's cover some questions.
Why use Fargate?
AWS Fargate is a serverless compute engine for containers that removes the need to manage EC2 instances. AWS manages the underlying infrastructure for you, allowing you to redirect your resources to other aspects of your workload. Here are some advantages of using Fargate:
- Fargate is Serverless: No need to provision, configure or scale clusters and VMs to run the containers.
- Simplified Management: Reduce operational overhead and focus on building and running applications.
- AWS Managed Security: AWS secures and patches the underlying infrastructure for you.
Why use Terraform?
Terraform manages your infrastructure using code, allowing for version control and collaboration. It makes it easy to scale up or down based on demand, and by sharing the templates you can reuse them across several applications in your organisation. Find out more at Terraform.io
The project
The project itself mirrors this split, with an Infra module and a CICD module. By the end of the implementation, you can deploy the project all together or module by module.
Architecture
The image below describes an AWS architecture in which an Application Load Balancer (ALB) and auto scaling on ECS ensure high availability, scalability, reliability, and cost efficiency by distributing traffic, dynamically adjusting resources, and maintaining optimal performance across multiple Availability Zones. The ALB is also configured to receive traffic from a custom domain name managed in AWS Route 53.
1 – Project Structure & Requirements
Project Structure
The project is hosted in the following Git repository: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform
To keep this post from getting too long, instead of pasting the Terraform code of each section here, I will link to the corresponding file in the GitHub repository and provide an explanation of what it creates.
Requirements
This project requires a Hosted Zone ID, a Hosted Zone Name, and an ACM Certificate ARN to be provided. Since these are one-time creations, you can create them directly in the AWS console.
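A minimal .tfvars sketch of how these inputs could be supplied (the variable names are illustrative and should be matched to the project's variables.tf, and the values below are placeholders):

```hcl
# terraform.tfvars - placeholder values; use your own hosted zone and certificate
hosted_zone_id   = "Z0123456789ABCDEFGHIJ"
hosted_zone_name = "example.com"
certificate_arn  = "arn:aws:acm:ap-southeast-2:123456789012:certificate/1a2b3c4d-example"
```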
2 – Network configuration
In this section, we create the network resources: the VPC, public and private subnets, Internet and NAT Gateways, and the route table configuration.
VPC, Internet/NAT Gateways and Route Tables
The first thing to do is define the network configuration of the project. These resources are created in the files below:
VPC
- File: vpc.tf
- Link: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform/blob/main/modules/infra/vpc.tf
Subnets, Internet & NAT Gateways and Route Tables
- File: network.tf
- Link: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform/blob/main/modules/infra/network.tf
By deploying the VPC in the Sydney region across two Availability Zones, we provide a highly available architecture.
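To make that layout concrete, here is a condensed sketch of the network resources (names, CIDRs, and the single NAT Gateway are illustrative; route tables are omitted for brevity, so see vpc.tf and network.tf for the full configuration):

```hcl
# A VPC in ap-southeast-2 with public and private subnets across two AZs.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = ["ap-southeast-2a", "ap-southeast-2b"][count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = ["ap-southeast-2a", "ap-southeast-2b"][count.index]
}

# Internet Gateway for the public subnets; NAT Gateway so the private
# subnets (where the Fargate tasks will run) can reach the internet.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}
```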
Application Load Balancer
The Application Load Balancer distributes incoming application traffic across the ECS Fargate tasks in multiple Availability Zones. Here the ALB listeners are configured to redirect HTTP traffic to HTTPS, ensuring all traffic is encrypted using SSL/TLS and protecting data in transit. This section also creates the ALB target group with the health check API that the ALB uses to control traffic distribution. Make sure your application exposes an API at the URL set in the path parameter; for convenience, you can set this in the variables section. A condensed sketch follows the file reference below.
Note: Ensure your ALB HTTPS listener is configured correctly with an SSL certificate. You can use AWS Certificate Manager (ACM) to manage your SSL certificates.
Application Load Balancer
- File: alb.tf
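Here is the sketch of the listeners and target group described above (resource names, the container port, and the health check path are illustrative):

```hcl
# The ALB, its target group, and the HTTP->HTTPS redirect listeners.
resource "aws_lb" "main" {
  name               = "app-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
  security_groups    = [aws_security_group.alb.id] # defined in the security groups section
}

resource "aws_lb_target_group" "app" {
  name        = "app-tg"
  port        = 8080
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip" # required for Fargate tasks

  health_check {
    path    = var.health_check_path # e.g. "/health"
    matcher = "200"
  }
}

# HTTP listener only redirects to HTTPS, so all traffic is encrypted.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.certificate_arn # the ACM certificate from the requirements

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```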
Route 53
As mentioned above, our Java application uses a custom domain name, and to use it we need to configure the ALB to receive traffic from that domain. For that, it is necessary to create a hosted zone record of type A.
Hosted Zone Record
- File: route53.tf
- Link: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform/blob/main/modules/infra/route53.tf
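A minimal sketch of that record, assuming the hosted zone inputs from the requirements section:

```hcl
# Type A alias record pointing the custom domain at the ALB.
resource "aws_route53_record" "app" {
  zone_id = var.hosted_zone_id
  name    = var.hosted_zone_name # e.g. "api.example.com"
  type    = "A"

  alias {
    name                   = aws_lb.main.dns_name
    zone_id                = aws_lb.main.zone_id
    evaluate_target_health = true
  }
}
```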
3 – ECS, ECR & Auto Scaling Configuration
Now, with the network set, we define the ECS configuration: the ECR repository and the ECS cluster, service, and tasks necessary to deploy our backend API container.
ECR
Amazon ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images.
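The repository definition itself is short; a minimal sketch (the repository name is illustrative):

```hcl
# ECR repository that will store the application images.
resource "aws_ecr_repository" "app" {
  name                 = "backend-api"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true # scan images for vulnerabilities on push
  }
}
```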
ECS
Here we define the ECS cluster, ECS service, and ECS tasks necessary for the application. The ECS service ensures the specified number of task instances is running, maintaining high availability of the application. While the ECS task definition is the blueprint, the ECS task is the running instance that executes in the serverless Fargate environment. Each task runs in its own isolated compute environment. A condensed sketch follows the file reference below.
ECS
- File: ecs.tf
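Here is the sketch of the cluster, task definition, and service (the image tag, CPU/memory sizes, and port are illustrative):

```hcl
resource "aws_ecs_cluster" "main" {
  name = "app-cluster"
}

resource "aws_ecs_task_definition" "app" {
  family                   = "backend-api"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc" # mandatory for Fargate
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.task_execution.arn # defined in the IAM section
  task_role_arn            = aws_iam_role.task.arn

  container_definitions = jsonencode([{
    name         = "backend-api"
    image        = "${aws_ecr_repository.app.repository_url}:latest"
    essential    = true
    portMappings = [{ containerPort = 8080 }]
  }])
}

# The service keeps the desired number of tasks running behind the ALB.
resource "aws_ecs_service" "app" {
  name            = "backend-api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = aws_subnet.private[*].id
    security_groups = [aws_security_group.ecs_task.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "backend-api"
    container_port   = 8080
  }
}
```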
Auto Scaling Configuration
In Fargate, you need to create the auto scaling policies yourself. While AWS Fargate abstracts away the need to manage the underlying infrastructure (such as EC2 instances), you still need to define how your ECS service should scale in response to changing demand. Here we create auto scaling policies based on CPU and memory utilisation metrics, sketched after the file reference below.
ASG
- File: asg.tf
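A sketch of the scaling target and the two target tracking policies (the thresholds and capacity limits are illustrative):

```hcl
# Register the ECS service's DesiredCount as a scalable target.
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 6
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "scale-on-cpu"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 70 # keep average CPU utilisation around 70%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}

resource "aws_appautoscaling_policy" "memory" {
  name               = "scale-on-memory"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 75 # keep average memory utilisation around 75%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageMemoryUtilization"
    }
  }
}
```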
4 – Permissions
By now you will have realised that we need to configure the necessary permissions for these resources to run. First we configure the security groups, then the necessary IAM roles.
Security Groups
The first security group is for the ALB; it allows access only on TCP ports 80 and 443 (that is, HTTP and HTTPS). The second is for the ECS tasks, allowing ingress only from the ALB and only on the port exposed by the task. Both are sketched after the file reference below.
Security Groups
- File: security_group.tf
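A sketch of both groups (the container port is illustrative):

```hcl
# ALB security group: open to the world on HTTP and HTTPS only.
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Task security group: reachable only from the ALB, only on the app port.
resource "aws_security_group" "ecs_task" {
  name   = "ecs-task-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 8080 # the port exposed by the ECS task
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```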
IAM Roles
An ECS task requires two roles to work correctly: one for the task execution and the other for the task itself. Sounds confusing, right? So, let's dig deeper:
- Task execution role: If you were running ECS on EC2 instances, you would need to grant the instances permissions such as pulling images from ECR, registering tasks, and so on. The same applies to Fargate: for the task to be launched into the "serverless" Fargate environment, the execution process requires these permissions.
- Task role: This role regulates which AWS services the application has access to. In other words, the AWS services the application running in the containers needs to reach. For instance, if you are deploying a back-end API that loads data from DynamoDB, you need to grant the task the necessary DynamoDB permissions.
Note: This project does not connect to any database, which is unlikely in a real application. To make the case clearer, I have added permissions for the task to access DynamoDB; both roles are sketched after the file reference below.
IAM Roles
- File: iam.tf
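A sketch of both roles (the role names and DynamoDB actions are illustrative):

```hcl
# Both roles trust the ECS tasks service; they differ in what they allow.
data "aws_iam_policy_document" "ecs_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# Execution role: lets Fargate pull from ECR and write logs on your behalf.
resource "aws_iam_role" "task_execution" {
  name               = "ecs-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume.json
}

resource "aws_iam_role_policy_attachment" "task_execution" {
  role       = aws_iam_role.task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Task role: what the application itself can call - DynamoDB here,
# purely for illustration.
resource "aws_iam_role" "task" {
  name               = "ecs-task-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume.json
}

resource "aws_iam_role_policy" "task_dynamodb" {
  name = "dynamodb-access"
  role = aws_iam_role.task.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"]
      Resource = "*"
    }]
  })
}
```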
5 - Variables
Terraform variables are well-defined inputs that allow for flexible, reusable, and maintainable Terraform configurations by parameterising infrastructure settings. For this project, I have defined a list of variables that you will need to configure; an illustrative subset follows the file reference below.
Variables
- File: variables.tf
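A sketch of what such declarations look like (names are illustrative; variables.tf in the repo is the authoritative list):

```hcl
variable "hosted_zone_id" {
  description = "ID of the existing Route 53 hosted zone"
  type        = string
}

variable "hosted_zone_name" {
  description = "Domain name served by the hosted zone"
  type        = string
}

variable "certificate_arn" {
  description = "ARN of the ACM certificate used by the HTTPS listener"
  type        = string
}

variable "health_check_path" {
  description = "Path of the application's health check API"
  type        = string
  default     = "/health"
}
```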
Testing
Wow, that's a lot! Now we are finally able to test it, right? Right? Almost there… Before that, you just need to configure access to your AWS account. It is quite simple; you just need to set the AWS variables I created above in a separate .tfvars file. To guide you through this step, you can follow the Terraform official docs here: Terraform_AWSProvider.
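As a minimal sketch, assuming you authenticate with a named CLI profile (the variable names are illustrative):

```hcl
# Minimal AWS provider setup; region and profile would come from your .tfvars file.
variable "aws_region" {
  type    = string
  default = "ap-southeast-2"
}

variable "aws_profile" {
  type        = string
  description = "Named profile from ~/.aws/credentials"
}

provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}
```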
After setting up your configuration, run the following commands:
- terraform init
- Plan (you can plan everything together or just one module)
- terraform plan -out=tf.plan
- terraform plan -target=module.infra -out=infra.tfplan
- Apply
- terraform apply tf.plan
- terraform apply infra.tfplan
Part 2 – CICD
In this last part, you will learn how to automate the deployment of your application's containers to ECS Fargate.
Architecture
The process is quite simple. When new code is merged into the application repository, AWS CodePipeline triggers a build that generates a Docker image of the application and pushes it to ECR. The image is then deployed to Fargate, updating the running "production" version.
1 – Requirements
Container Application
As mentioned before, this project requires a demo application integrated with Docker. The application must have a Dockerfile to generate the image and a health check API matching the one defined in the Application Load Balancer target group. Lastly, you need to set the repository attribute in the CICD parameter variable.
AWS GitHub connection
AWS CodePipeline requires a connection to the GitHub repository so it can be triggered by new pushes and build the application. To do so, you have to establish the connection via AWS CodeConnections (formerly CodeStar connections). This is a one-time configuration that can be done in the AWS console. Follow the steps from the AWS official doc: AWSCodeStartConnection
2 – AWS CodePipeline
Below, we create the AWS CodePipeline and all required resources, such as the artifact bucket and the permissions necessary to trigger the deployment. A condensed sketch of the pipeline stages follows the file reference below.
- CodePipeline
- File: codepipeline.tf
- Link: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform/blob/main/modules/cicd/codepipeline.tf
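Here is the sketch of the three stages, Source (GitHub via CodeConnections), Build (CodeBuild), and Deploy (ECS). It assumes a pipeline role, artifact bucket, and connection ARN variable defined alongside it, as in codepipeline.tf; names are illustrative:

```hcl
resource "aws_codepipeline" "app" {
  name     = "backend-api-pipeline"
  role_arn = aws_iam_role.codepipeline.arn # pipeline service role, defined separately

  artifact_store {
    type     = "S3"
    location = aws_s3_bucket.artifacts.bucket # artifact bucket, defined separately
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        ConnectionArn    = var.codestar_connection_arn
        FullRepositoryId = var.app_repository # e.g. "org/backend-api"
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source"]
      output_artifacts = ["build"]
      configuration = {
        ProjectName = aws_codebuild_project.app.name
      }
    }
  }

  stage {
    name = "Deploy"
    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      version         = "1"
      input_artifacts = ["build"]
      configuration = {
        ClusterName = aws_ecs_cluster.main.name
        ServiceName = aws_ecs_service.app.name
        FileName    = "imagedefinitions.json" # produced by the build stage
      }
    }
  }
}
```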
3 – AWS CodeBuild
In this last step, we configure the build of the application's Docker image and its push to ECR, sketched after the file reference below.
- CodeBuild
- File: codebuild.tf
- Link: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform/blob/main/modules/cicd/codebuild.tf
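A sketch of the project definition (the build image and buildspec details are illustrative; see codebuild.tf for the full configuration):

```hcl
# CodeBuild project that builds the Docker image and pushes it to ECR.
resource "aws_codebuild_project" "app" {
  name         = "backend-api-build"
  service_role = aws_iam_role.codebuild.arn # build service role, defined separately

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/amazonlinux2-x86_64-standard:5.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true # required to run docker build/push inside the build

    environment_variable {
      name  = "ECR_REPOSITORY_URL"
      value = aws_ecr_repository.app.repository_url
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec.yml" # docker build, docker push, write imagedefinitions.json
  }
}
```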
Testing
After setting up your CICD environment, we can test your first application build. Run the following commands:
- terraform init
- Plan (you can plan everything together or just one module)
- terraform plan -out=tf.plan
- terraform plan -target=module.cicd -out=cicd.tfplan
- Apply
- terraform apply tf.plan
- terraform apply cicd.tfplan
Conclusion
Now that was a lot, A LOT! I know this amount of code can be confusing and hard to follow, so to help with the process I have uploaded my code example to this Git repository: https://github.com/cevoaustralia/ecs-fargate-cicd-terraform
I hope that with this blog post you are now able to migrate your applications to ECS and, most of all, use Terraform to create your infrastructure.
Until next time.
Alan Terriaga