And today I’m back with another instalment in my “Introduction to CloudFormation” series. Last week we took a look at the technology behind AWS CloudFormation and wrote our first stack. In today’s article we’re going to take it a step further and see how we can better define our input parameters. We’ll also dive into the ideas behind splitting our workloads into multiple templates and how we can pass details between them. As mentioned in previous articles, if there is anything specific you’d like us to cover in a future article please feel free to reach out using the contact details at the bottom of the page.
For reference, you can find the previous articles in this series here:
The Problem Statement
So if we’ve already covered how to write a CloudFormation template and even enabled it to accept parameters, why are we covering it again? Well, think about the following scenario you might come across while working in AWS.
You have a new web server that you’d like to run in your brand new AWS account. This is simple enough: you can use AWS CloudFormation to deploy it via the AWS::EC2::Instance resource. But it’s also going to need a Security Group; well, we can add that too. And an EFS volume to store persistent data, a Load Balancer, VPC, Subnet, Internet Gateway, NAT Instance, Auto Scaling Group, Launch Template… the list goes on. All of this is fine, each of those components can simply be a separate resource within our CloudFormation template and we can use our good old friend Ref to pass values between each of them. Sure, the CloudFormation template might be a couple of hundred lines of code… but it’s all focused on a unified outcome: deploying our new server.
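To make that concrete, here’s a minimal sketch of two resources living in the same template, with the instance picking up the Security Group via Ref. The AMI ID and instance type are placeholders, not values from the original article:

```yaml
Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP to the web server
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      InstanceType: t3.micro
      SecurityGroupIds:
        - !Ref WebServerSecurityGroup  # Ref resolves to the Security Group's ID
```

Because both resources sit in the same template, CloudFormation works out the dependency for us and passes the value along automatically.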
What if we then need to decouple the workloads on our server and have separate Web, Middleware, and Database roles on different systems? Well, we can add more resources to our template… but it is starting to get a little large now. And then, we need to add an API gateway to our application to support an upcoming Mobile App… what do we do then? Our template is very large now and is also responsible for a lot of different things, and configuration management is starting to get complicated.
Separation of Concerns
In computer science, separation of concerns is the idea of splitting a program into distinct sections, with each section (or “concern”) focused on achieving a single task or outcome. You can find more information on the separation of concerns here.
We see this design principle across all areas of the IT landscape from software development to network design. And it’s no different when it comes to CloudFormation templates. In the above example, our template started off doing one thing “deploying our web server”. But over time it had to take on more responsibilities as more components were added to the platform resulting in a large contiguous block of code that’s hard to manage, test, and deploy. This doesn’t even get into the complexities around change/release management when multiple departments are responsible for different resources within the same file.
Applying it to CloudFormation templates
So how might we look to separate the different components of an environment so as to make it easier for everybody to manage and innovate? At the end of the day, there is no right/wrong answer as to what resources to put where… but there are some considerations that can help drive your decisions:
- **Areas of responsibility:** Are different teams or individuals responsible for different parts of the solution? Do you have different teams responsible for networking, databases, web environments, etc.?
- **Different release cycles:** Do (or will) different parts of the solution require different release cycles? Typically you’ll iterate on a website a lot more than you will a backend API. In that case, it can make sense to split those two components apart to minimise the potential impact on each other.
- **Differing change/release management procedures:** Are there different processes that apply to different parts of your application? Some organisations require a security review any time a change might impact an edge location (Load Balancer, Firewall). If your middleware server is in the same template as your Load Balancer, does that mean it needs a review every time you update it… even though you’re not directly changing the Load Balancer?
Result
At the end of the day, we want to minimise the potential impact of a failed change, simplify the management of the code and be able to leverage existing resources when applicable. Taking our example from earlier we might look to break out our solution in the following way:
- **Network:** The VPC, Subnets, Route Tables, Route Table Associations, NAT Gateways, etc. can all go into this template.
- **Web/Middleware/Database servers:** All the resources required to run each of the different workloads. Each of the three templates would contain only the resources needed for that part of the application: EBS volume, EC2 Instance, Security Group, cluster configuration, etc.
- **Presentation:** The Load Balancers, Elastic IP Addresses, and public-facing Security Groups might all live here. This keeps all our public-facing resources in one place.
- **API:** Given the API was deployed to support a different part of the application, it makes sense for it to live in its own template. It might need a different release cycle than the other components and might even be managed by a different team.
OK, so we’ve identified a logical way of separating out each of the different components of our solution, but how do we implement it? We are going to need to refer to resources from the network template in almost every other template we’ve defined. Well, like most things in the land of AWS, we have a few different solutions available to us. The three most common are Nested Stacks, Fn::ImportValue, and Parameter Store. Each of these options comes with its own benefits, complexities, and idiosyncrasies, as we’ll explore in the following sections.
Nested Stacks
At their most basic, Nested Stacks are CloudFormation stacks that are created as part of other CloudFormation stacks. There is a CloudFormation resource type called AWS::CloudFormation::Stack that is used to deploy one stack from within another. This presents us with an interesting opportunity to separate out different parts of our solution.
In our example, we might have a parent stack called ProjectUnicorn (any good project has something to do with unicorns) which would be responsible for deploying each of the six previously outlined stacks. Each of those six stacks would then need to output the values that other templates will need (for example vpcId). We could then leverage the intrinsic function Fn::GetAtt (outlined here) to get the values we need. A detailed walk-through on how to code this up can be found on the AWS website here.
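As a rough sketch of how the parent template might look (the S3 bucket, template file names, and output name are illustrative assumptions, not values from the walk-through linked above):

```yaml
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      # TemplateURL must point at a template uploaded to S3; this URL is a placeholder
      TemplateURL: https://s3.amazonaws.com/my-templates-bucket/network.yaml

  WebStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates-bucket/web.yaml
      Parameters:
        # Fn::GetAtt reads an output (here, VpcId) declared in network.yaml
        VpcId: !GetAtt NetworkStack.Outputs.VpcId
```

The parent stack owns the lifecycle of both child stacks, which is precisely what makes roll-backs ripple through the whole tree.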
Personally, I’m not a big fan of Nested Stacks and have seen a lot of problems caused by failed roll-backs. While Nested Stacks do provide a lot of benefits, in my experience the downsides outweigh them. However, if you’d like to learn more about Nested Stacks to make your own judgment, you can find more information in the CloudFormation User Guide.
So, if not Nested Stacks then what?
Fn::ImportValue
Next in line is using Fn::ImportValue and Exports to share parameters between Stacks.
If we look back at my previous article, I showed how we can output values from a template so they’re visible within the AWS CloudFormation Management Console. The one part I didn’t mention is that we can expand on our outputs by leveraging an additional property of outputs called Export.
Exports allow us to, well, export a value so it can be used in a cross-stack reference. By exporting a value, we make it available to the other stacks running within our AWS account (and Region). All we need to do is write an output for each value we want to publish and make sure to define an Export name.
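In the originating (network) template, that might look something like the sketch below. It assumes the template defines a VPC resource with the logical ID Vpc; the export name itself is illustrative:

```yaml
Outputs:
  VpcId:
    Description: ID of the shared VPC
    Value: !Ref Vpc
    Export:
      # Including the stack name keeps export names unique across the account
      Name: !Sub "${AWS::StackName}-VpcId"
```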
Then, in the consuming template, we can use another intrinsic function called Fn::ImportValue and simply reference the export name defined by the originating stack. This allows us to consume the exported value anywhere within our consuming stack in much the same way we would with Ref or Fn::GetAtt.
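On the consuming side, a minimal sketch might look like this, assuming the originating stack was named network and exported network-VpcId as above:

```yaml
Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier security group
      # Import the value exported by the network stack
      VpcId: !ImportValue network-VpcId
```

One thing to keep in mind: CloudFormation won’t let you delete or change an exported value while another stack is importing it, which is part of what gives this pattern its safety.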
This provides us with a nice level of isolation between originating and consuming stacks, reducing the impact of failed deployments. It does, however, require us to manage and track the different export names we are using; a typical way to do this is to make the originating stack name part of the export name. You can see this in action by following the detailed walk-through on referencing resource outputs in other stacks in the CloudFormation User Guide.
I typically use this for customers just getting started with AWS and those who don’t have complex parameter management needs. It provides a lot of flexibility without the need to deploy additional resources into the environment.
Parameter Store
Leveraging Parameter Store is the third way we can share information between stacks. Within a CloudFormation template, we can create a Systems Manager Parameter using the AWS::SSM::Parameter resource type. We can then use Ref and Fn::GetAtt to populate these parameters with the required values. This gives us a central location that we can use to store the current configuration of our environment.
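In the originating template, that could look something like the following sketch. The parameter path is an assumption for illustration; you’d choose a naming convention that suits your environment:

```yaml
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  VpcIdParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /project-unicorn/network/vpc-id   # parameter path is illustrative
      Type: String
      Value: !Ref Vpc                          # store the VPC ID in Parameter Store
```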
We can then consume the values of these parameters by using an SSM parameter type for our CloudFormation Parameters (for example, AWS::SSM::Parameter::Value&lt;String&gt;). We simply pass in the name of the Systems Manager parameter that contains the value we want, and from there handle it like any other parameter.
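A minimal consuming template, assuming the parameter path used in the earlier sketch, might look like this:

```yaml
Parameters:
  VpcId:
    # CloudFormation resolves this SSM parameter name to its value at deploy time
    Type: AWS::SSM::Parameter::Value<String>
    Default: /project-unicorn/network/vpc-id

Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier security group
      VpcId: !Ref VpcId
```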
This pattern provides a much higher level of isolation between the source and destination CloudFormation Stacks. That in turn limits the impact a bad/incorrect deployment might have. It also allows us to centralise our configuration across all the current stacks. Finally, we can do some interesting things like trigger events when certain parameters change (useful if an update needs to be triggered within the application).
This is my preferred method of managing parameters and exports at the moment due to the flexibility it offers. In addition, it means I can use the same pattern for storing my CloudFormation and application parameters. More information on using Parameter Store can be found here.
Conclusion
In conclusion, there are a lot of benefits to splitting up our workloads into multiple templates. And AWS provides us with a number of ways to get visibility and communicate parameters between these stacks. Finally, I’ve given you a bit of a peek into how I approach this topic and the tools I use on a daily basis. In a future article I’ll dive deeper into the specific mechanisms and patterns I use to manage parameters across multiple stacks and AWS accounts. In the meantime, if there are any other areas you are interested in… feel free to reach out using the contact details below.