In part 8 of my series showcasing the six pillars of the AWS Well-Architected Framework, we take a closer look at the Sustainability pillar. The goal of the Sustainability pillar is to provide design principles, operational guidance, best practices, potential trade-offs, and improvement plans you can use to meet sustainability targets for your AWS workloads. If you’d like to learn more about the other pillars of the Well-Architected Framework, check out the other blogs in this series via the links below. Otherwise, let’s get stuck in!
What we will be covering today
- Building sustainably in the cloud
- Improving sustainability practices
Why we are learning this
- To help others better understand the concepts of sustainability
- Using the AWS Well-Architected: Sustainability Pillar to drive better awareness of the impact of how we operate in the cloud
How this will help me
You will:
- Understand good practices for running a sustainable cloud operating model
- Be able to help champion cloud sustainability across your organisation
- Implement good sustainability practices into your organisation
What is Sustainability in the Cloud?
Sustainability is about reducing the environmental and socioeconomic impact of operating workloads in the cloud. On the environmental side, this means reducing power and cooling consumption, and with it the strain those place on shared resources. The socioeconomic impact is closely related to “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Your business or organisation can have negative environmental impacts such as direct or indirect carbon emissions, non-recyclable waste, and damage to shared resources like clean water. Operating in the cloud sustainably is therefore both socially and environmentally responsible.
Building Sustainably in the Cloud
Do I Need This?
It may sound like an obvious question, but in the context of sustainability, reducing your footprint in the cloud is what drives down the waste of resources such as power and cooling. While we as cloud customers don’t pay directly for the consumption of these resources, we are still responsible for not consuming more than we need. Doing so not only lightens the demand on those resources but also saves money, by shrinking both the management overhead and the bill.
Can I Turn it Off?
Turning off resources when they aren’t needed just makes sense, and development environments are a great target for this. Pairing resource tags with a scheduler gives you the best of both worlds: persistent resources where you need them, and resources that are torn down at the end of the day everywhere else. Running resources only when you need them saves on costs while also helping the environment through reduced power consumption. Some great open-source tooling exists to make this job a lot easier; a good example of a tag-driven resource scheduler is Lights Off AWS: https://github.com/sqlxpert/lights-off-aws
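As a sketch of the tag-driven approach (the tag key and value below are illustrative assumptions, not the actual tag schema used by Lights Off AWS), the core of such a scheduler is just a filter over tagged instances plus a stop call:

```python
# Sketch of a tag-driven off-hours scheduler. The tag key/value
# ("schedule" / "off-hours") are illustrative placeholders.

def instances_to_stop(instances, tag_key="schedule", tag_value="off-hours"):
    """Return IDs of running instances that carry the off-hours tag."""
    return [
        inst["InstanceId"]
        for inst in instances
        if inst.get("State") == "running"
        and inst.get("Tags", {}).get(tag_key) == tag_value
    ]

# Wired up to AWS, this would run from a scheduled EventBridge rule,
# roughly like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ids = instances_to_stop(my_flattened_describe_instances_output)
#   if ids:
#       ec2.stop_instances(InstanceIds=ids)
```

The same filter run in the morning against a start call brings the environment back, so developers never notice the savings happening overnight.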
Right-Sizing Workloads
Using only what you need is a mantra often lost in cloud contexts – with virtually unlimited resources available, it is tempting to overprovision (a common trait of traditional on-premises or private cloud workload models). With auto-scaling capabilities, there is less of a barrier to providing for load conditions and less need to overprovision. Instead of one very large RDS instance sized to cope with transient heavy read loads, consider an architecture that uses read replicas to reduce the capacity the primary database needs to support your peak loads (https://aws.amazon.com/rds/features/read-replicas/).
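To illustrate the read-replica pattern, here is a hedged sketch of application-side routing (endpoint names are made up): reads fan out round-robin across replicas, while writes go to the primary, so the primary can be sized for write load rather than peak read load.

```python
import itertools

class ReadWriteRouter:
    """Route SELECT statements round-robin across read replicas and
    everything else to the primary. Endpoints are illustrative."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def endpoint(self, sql):
        # A real router would parse SQL properly; this prefix check
        # keeps the sketch short.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

router = ReadWriteRouter(
    "primary.example.rds.amazonaws.com",
    ["replica-1.example.rds.amazonaws.com",
     "replica-2.example.rds.amazonaws.com"],
)
```

Many database drivers and proxies offer this kind of read/write splitting out of the box, so check your stack before rolling your own.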
Utilise Serverless Infrastructure Where Possible
Using serverless architecture where possible wastes fewer resources. Even if a monolithic application does not support a complete serverless architecture, it might be worth considering where components of the application could use managed and/or serverless services. This is referred to as replatforming. Services such as Lambda, Aurora Serverless, Redshift Serverless, EKS with the Karpenter autoscaler, and ECS Service Auto Scaling are all good options where they fit.
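As an illustration of how small a replatformed component can be, here is a hypothetical Lambda-style handler (the event shape and field names are made up for the example): the component consumes compute only while a request is actually being processed.

```python
# Hypothetical example: a request-handling component replatformed as a
# Lambda handler. No servers to provision; compute runs (and bills)
# only for the duration of each invocation.
import json

def handler(event, context=None):
    """Minimal Lambda-style handler: parse input, return a response."""
    body = json.loads(event.get("body", "{}"))
    width = int(body.get("width", 128))  # illustrative parameter
    return {
        "statusCode": 200,
        "body": json.dumps({"requested_width": width}),
    }
```

When the function is idle, nothing is running on your behalf, which is exactly the sustainability win over an always-on instance.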
ARM Your Workloads
Here is where things get really interesting. Did you know that AWS has custom silicon that uses the ARM (Advanced RISC Machine) architecture instead of x86_64? It is called the Graviton family and is now in its third generation. The race to ARM is epitomised by the 2021 release of Apple MacBooks, which offered massive leaps in performance while also significantly reducing power consumption over their x86_64 predecessors. The first Graviton processor was released by AWS in 2018; the ARM architecture itself, however, goes back to 1983 with the original Acorn RISC Machine.
AWS Graviton processors are custom-built by AWS to deliver the best price-performance for cloud workloads. The Graviton processor is one of three processor options and powers Amazon EC2 instance types for general purpose, compute-optimised, memory-optimised, and storage-optimised use cases. Instances powered by Graviton are available in most AWS Regions, as well as GovCloud and the AWS China Regions.
Launched in 2019, Graviton2 is the second generation of AWS Graviton processors. Graviton2-based instance types offer up to 40% better price performance compared to comparable fifth-generation x86-based instances. (The first generation (A1) of Arm-based, Graviton-powered EC2 instances was launched at re:Invent 2018.) The feature set of the Graviton processor is optimised for cloud workloads and offers the following benefits:
- Large L1 and L2 caches for every virtual central processing unit (vCPU), which means a large portion of your workload will fit in cache without having to go to memory.
- Every vCPU is a physical core, meaning more isolation between vCPUs and no resource sharing between vCPUs except last level cache and memory system.
- Cores connected together in a mesh with ~2TB/s of bisection bandwidth, allowing applications to move very quickly from core to core when sharing data.
- Graviton’s memory architecture means you don’t need to worry where application memory is allocated from, or which cores are running the application.
Graviton is also supported on Fargate, so you can run ECS tasks on ARM64 without managing instances yourself. For ECS tasks, you declare the CPU architecture inside your task definition, and it is as easy as this:

```json
{
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "ARM64"
    }
}
```
https://docs.aws.amazon.com/AmazonECS/latest/userguide/ecs-arm64.html
Improving Sustainability Practices
Use Services Responsibly
Using the right service for the job is a great start towards building a sustainability mindset. It begins with asking early on whether a service is right for what you are trying to achieve. Different services have different impacts on environmental factors such as power and cooling. Everything runs on physical infrastructure somewhere, so we must be mindful that some services consume more energy than others.
Innovation is key to achieving sustainability goals—challenges ranging from decarbonising operations to conserving water are addressed through technologies that drive sustainable transformation. AWS enables customers to build sustainability solutions spanning carbon tracking, energy conservation, and waste reduction, using AWS services to ingest, analyse, and manage sustainability data.
Turn Off or Remove Unused Infrastructure
If it is on, it’s using power, cooling and potentially other indirect resources. Turn it off or tear it down!
Track Your Impacts
You can track your own impacts towards sustainability using the Customer Carbon Footprint Tool.
Undertake a Well-Architected Review
Using a partner such as Cevo to conduct an AWS Well-Architected Review on your organisation, you can gain insights into how your organisation can work more sustainably using a Sustainability Pillar Lens.
Utilise AWS EC2 Spot Instances
This may sound counter-productive towards sustainability, however Spot Instances run on spare, unused EC2 capacity in AWS data centres, which helps AWS improve data centre utilisation. Spot Instances provide up to a 90% discount compared to On-Demand prices and can be used for stateless, fault-tolerant, or otherwise flexible applications. Spot offers the ability to run hyperscale workloads at significant cost savings while helping AWS operate its data centres more efficiently.
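Designing for interruption is what makes Spot workable. A minimal sketch of a checkpoint-on-interruption work loop (the `interrupted` callback stands in for polling the EC2 instance metadata interruption notice; the function names are illustrative):

```python
def run_with_checkpoints(tasks, process, save_checkpoint, interrupted):
    """Process tasks in order; if an interruption notice arrives,
    persist the unprocessed remainder and stop cleanly so a new
    instance can resume the work."""
    results = []
    for i, task in enumerate(tasks):
        if interrupted():                # e.g. Spot two-minute notice
            save_checkpoint(tasks[i:])   # resume from here elsewhere
            break
        results.append(process(task))
    return results
```

In practice `interrupted` would poll the instance metadata service (or react to an EventBridge Spot interruption warning), and `save_checkpoint` would write to durable storage such as S3 or SQS.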
Take Advantage of Graviton Instances Where Supported
Common application platforms like Python, Node.js, Ruby, the AWS CLI, the AWS CDK, Terraform, .NET Core and .NET Framework, Java, and many others all offer native ARM support. With ECS and EKS also supporting ARM instances, it makes sense to take advantage of this wherever your workload supports it. Doing so will not only reduce costs but also reduce environmental impact, both directly and indirectly.
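Before targeting Graviton, it helps to have a quick architecture check in your build or runtime tooling. A minimal sketch in Python (the image-tag naming convention here is purely illustrative):

```python
import platform

def is_arm64():
    """True when running on an ARM64/aarch64 host (e.g. Graviton)."""
    return platform.machine().lower() in ("arm64", "aarch64")

def image_tag(base="myapp"):
    """Pick an architecture-specific artifact name. The tag scheme
    (":arm64" / ":amd64") is a made-up convention for this sketch."""
    return f"{base}:{'arm64' if is_arm64() else 'amd64'}"
```

Multi-architecture container images (built with `docker buildx`, for example) remove the need for this kind of branching entirely, which is usually the better long-term path.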
Implement Sustainability Into Processes
Adding sustainability to your list of business requirements can result in more cost-effective solutions. Focusing on getting more value from the resources you use and using fewer of them directly translates to cost savings on AWS as you pay only for what you use.