In today’s article, we are moving on from our recent series on CloudFormation (at least for the moment) and taking a look at AWS CodePipeline. CodePipeline is a managed AWS service that provides continuous delivery, helping you automate your releases and deployments into your AWS environment.
CodePipeline Concepts
Before we get into how CodePipeline works and how we can leverage it, we need to understand the concepts behind the service. At its core, AWS CodePipeline is a service that runs “Pipelines”, which are a series of “Stages” that group “Actions” together to manipulate our “Artifacts”, getting them to a state where we can deploy them into production. But what does all of that mean?
Pipelines
We start off with the Pipeline, which is just the container or construct that holds all of our steps together. We will want a pipeline for each workload/application that we want to automate. As an example, let’s say we have a git repository containing our recently written CloudFormation code, and we want to automate the deployment of that code into our production environment. In that case, we’d create a new Pipeline to hold all the tasks we’ll need to make that possible.
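If we define the pipeline in CloudFormation itself (fitting, given what we’re deploying), the Pipeline resource is the outer shell that everything else hangs off. Here’s a minimal sketch, where the role and bucket names are placeholders I’ve made up for this example:

```yaml
Resources:
  DeploymentPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn   # IAM role CodePipeline assumes, defined elsewhere
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket     # S3 bucket that holds artifacts between actions
      Stages: []                          # Stages and Actions are fleshed out in the sections below
```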
Artifacts
Artifacts are simply the collections of data that are worked on by pipeline actions; they are what we are trying to get from our repository into our environment. An artifact can be as simple as a CloudFormation template file, or as complex as an entire application git repository. Each action step within a Pipeline takes artifacts as inputs, manipulates them, and can produce new artifacts as outputs.
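As a rough sketch of how that looks in practice, a source action declares an output artifact and a later action consumes it as an input. The action names and the “SourceOutput” label below are arbitrary ones I’ve picked:

```yaml
# Two actions from the same pipeline, abridged to show the artifact flow.
- Name: FetchSource
  OutputArtifacts:
    - Name: SourceOutput    # zipped into the pipeline's artifact store
  # ...ActionTypeId/Configuration omitted...
- Name: DeployTemplate
  InputArtifacts:
    - Name: SourceOutput    # unpacked from the artifact store for this action
```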
Stages
The next piece we’ll need is a Stage. A Stage is just a logical unit used to group a series of Actions together, which helps us maintain the pipeline over time. It also makes it clearer which part of the process the pipeline has reached. For our example, we’ll probably need three stages:
- Source Code
- Test
- Deploy
This clearly delineates the actions that relate to testing our code from those responsible for deploying it.
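In a CloudFormation-defined pipeline, that grouping is just a named list. A sketch of our three stages might look like the following (note that stage names can’t contain spaces, so “Source Code” becomes Source; the actions themselves are covered under the next heading):

```yaml
Stages:
  - Name: Source      # get the code out of the repository
    Actions: []       # source action(s) go here
  - Name: Test        # validate the code before it goes anywhere
    Actions: []
  - Name: Deploy      # push the change into the environment
    Actions: []
```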
Actions
Actions are the individual tasks/activities that the pipeline executes as it runs. There are six types of actions we can use as part of a CodePipeline Pipeline:
- Source: Responsible for getting our source code from its originating location (a Git repository or S3 bucket).
- Build: Typically leverages a build provider such as CodeBuild, Jenkins, or TeamCity to compile/package our code.
- Test: Responsible for validating our code’s functionality. This might be CodeBuild, BlazeMeter, Ghost Inspector, or others.
- Deploy: Typically leverages an AWS service to take our source code and update our environment. This might be as simple as uploading our artifacts to an S3 bucket, or deploying an updated CloudFormation stack.
- Approval: Not all organisations are able to deploy changes straight into production without a manual review process. The Approval action allows us to leverage notification services such as SNS to request approval from outside of our pipeline prior to continuing.
- Invoke: Sometimes we need to interface with other services as part of a pipeline. We might need to tell the application an update is occurring, or leverage a versioning system. Invoke actions allow us to interface with Lambda or Step Functions to achieve this.
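Each of those types maps to the Category field in an action’s ActionTypeId. As one concrete example, here’s roughly what a manual approval action backed by SNS looks like; the action name and topic reference are placeholders:

```yaml
- Name: ProductionGate
  ActionTypeId:
    Category: Approval    # one of Source, Build, Test, Deploy, Approval, or Invoke
    Owner: AWS
    Provider: Manual
    Version: '1'
  Configuration:
    NotificationArn: !Ref ApprovalTopic   # SNS topic that receives the approval request
    CustomData: Please review this change before it goes to production
```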
How do I get started with CodePipeline?
The easiest way to get started with CodePipeline is to leverage it to automatically deploy your CloudFormation templates from your git repository. This is where I normally start for each of my projects.
I start by creating a brand new pipeline with two stages in it: Source and Deploy. The Source stage is responsible for getting the CloudFormation template and making it available as an artifact I can leverage in the Deploy stage. If I’m using a CodePipeline-supported Git repository (GitHub, CodeCommit, or Bitbucket Cloud), I’ll leverage the standard Git integration. If I’m using something else (such as on-premises Bitbucket or GitLab), I’ll have that tool upload the code to an S3 bucket and integrate the Source stage with the bucket.
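As a sketch, the CodeCommit variant of that Source stage looks something like the following, with placeholder repository and branch names; the S3 variant swaps the provider for S3 and points at a bucket/key instead:

```yaml
- Name: Source
  Actions:
    - Name: FetchTemplate
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: CodeCommit
        Version: '1'
      Configuration:
        RepositoryName: my-infrastructure   # placeholder repository name
        BranchName: main                    # branch to watch for changes
      OutputArtifacts:
        - Name: TemplateSource              # the artifact the Deploy stage will consume
```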
Once I’ve got my CloudFormation template available as an artifact, I move on to the Deploy stage. Here I leverage the CloudFormation deploy action, which takes my template and deploys/updates the existing stack within the environment. This results in changes I make to the template being replicated into production within a minute or two of being committed to the repository.
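A sketch of that Deploy stage, assuming the TemplateSource artifact from above and placeholder stack/role names:

```yaml
- Name: Deploy
  Actions:
    - Name: UpdateStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      InputArtifacts:
        - Name: TemplateSource
      Configuration:
        ActionMode: CREATE_UPDATE                       # create the stack if missing, update it otherwise
        StackName: my-production-stack                  # placeholder stack name
        TemplatePath: TemplateSource::template.yaml     # artifactName::pathWithinArtifact
        RoleArn: !GetAtt CloudFormationDeployRole.Arn   # role CloudFormation assumes for the update
        Capabilities: CAPABILITY_NAMED_IAM              # only needed if the template manages named IAM resources
```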
Once I have those two stages in place, I can use that skeleton to start building the wider infrastructure I’ll need to facilitate a Test stage. While I maintain that pipelines should ALWAYS have test stages in them, it’s a little hard to run one if you don’t have a test environment yet. And I don’t want to be deploying things manually until I get one, as the additional risk/effort isn’t worth it. By building a simple “two-stage” pipeline out first, I can then iterate on it as the environment grows.