AWS App Mesh is an Amazon Web Services (AWS) offering that streamlines the management and oversight of microservices applications. The service tackles the intricacies of orchestrating communication among the various microservices within a complex application and offers comprehensive monitoring and insights into these interactions. Its goal is to improve the reliability, scalability, and performance of microservices-based architectures.
Basic Concepts
To comprehend how AWS App Mesh operates, it’s essential to grasp several key concepts outlined below.
Microservices Architecture
Microservices architecture revolves around decomposing an application into smaller, self-contained services that are loosely coupled. Think of microservices as separate building blocks in a large structure: each block can do its own thing, but they need to connect and work together. Each service can be independently developed, deployed, and scaled. While this approach offers flexibility and scalability, it also introduces challenges in managing the communication, monitoring, and control of these services.
Service Mesh
A service mesh is a dedicated layer of infrastructure that manages communication between microservices. It delivers capabilities such as service discovery, load balancing, traffic management, encryption, and observability. AWS App Mesh is an implementation of the service mesh pattern. It acts as a control plane that helps microservices communicate with each other efficiently, securely, and reliably.
Envoy Proxy
Envoy Proxy is an open-source, high-performance network proxy and communication layer designed for modern, cloud-native applications. It is often used as a data plane component in service mesh architectures. Envoy acts as an intermediary between different microservices, facilitating secure and efficient communication while providing a range of features for traffic management, observability, and security.
AWS App Mesh works in conjunction with Envoy to manage and control the communication between microservices within a service mesh. Envoy enhances the capabilities of AWS App Mesh by providing the networking and communication features that microservices need to interact seamlessly and reliably. It abstracts away the complexities of network management, allowing developers to focus on building and deploying microservices while ensuring optimal performance and observability.
Envoy’s robust capabilities and compatibility with AWS App Mesh make it a crucial component in building resilient, scalable, and observable microservices architectures.
Service Discovery
Service discovery is a crucial concept in distributed computing and networking, particularly in modern application architectures like microservices and cloud-native environments. It refers to the automated process of locating and identifying services, resources, or components within a network or infrastructure. Service discovery enables seamless communication and interaction between different components of an application, even as they dynamically scale, deploy, or change.
AWS App Mesh enables automatic service discovery, allowing microservices to locate and communicate with each other without requiring hard-coded endpoint configurations. AWS App Mesh uses AWS Cloud Map under the hood to facilitate service discovery.
AWS Cloud Map is an Amazon Web Services (AWS) service that makes it easier for applications to find and connect to resources in the cloud. It acts like a map for your cloud resources, helping applications discover and keep track of them as they change.
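To make this a little more concrete, the snippet below is a minimal sketch of what a Cloud Map lookup looks like through the AWS SDK for Python (boto3). The namespace and service names (myshop.local, products) are hypothetical placeholders; in an App Mesh setup the Envoy proxy performs this discovery on your behalf, so you would rarely call it directly.

```python
import boto3

# AWS Cloud Map is exposed via the 'servicediscovery' client in boto3.
cloud_map = boto3.client("servicediscovery")

# Hypothetical namespace and service names, used purely for illustration.
response = cloud_map.discover_instances(
    NamespaceName="myshop.local",  # Cloud Map namespace the mesh services register in
    ServiceName="products",        # Cloud Map service backing the Products API
)

# Each discovered instance carries attributes such as its IP address and port,
# which is what App Mesh (via Envoy) ultimately uses to reach the task.
for instance in response["Instances"]:
    print(instance["InstanceId"], instance.get("Attributes", {}))
```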
AWS App Mesh Components
AWS App Mesh comprises several vital components, described below; a short SDK sketch after the list shows how they fit together:
- Mesh: A mesh establishes a logical boundary for network traffic among the services contained within it.
- Virtual Nodes: These entities represent individual microservices within the mesh. Each virtual node comes with its own configuration for handling traffic, including routing rules, timeouts, and load balancing preferences.
- Virtual Routers: These serve as entry points for incoming traffic to the mesh and dictate how traffic is distributed among different virtual nodes.
- Routes: A route specifies how traffic is directed from a virtual router to a specific virtual node. It can be based on criteria like HTTP headers, paths, and more.
- Listeners: Listeners define the port and protocol on which a virtual node, virtual router, or virtual gateway accepts inbound traffic.
- Backends: Backends are the virtual services that a virtual node is allowed to send outbound traffic to, representing the other services or resources a microservice depends on.
- Virtual Gateway: The entry point for traffic arriving from sources outside the mesh (such as clients or users), allowing for controlled and managed communication with the microservices residing within the mesh.
- Gateway Route (Ingress Traffic): A configuration element that defines how incoming traffic from external sources is routed to specific microservices within the mesh. It lets you control the flow of traffic and direct requests to the appropriate destination based on conditions such as URL paths, HTTP headers, or hostnames.
- Virtual Service: Defines how traffic is routed and managed for a specific microservice within the mesh. A virtual service acts as an abstraction layer that separates the service’s logical identity from its underlying instances, allowing for flexible and controlled communication between microservices. Virtual services also define how external traffic from gateway routes is directed to specific virtual nodes.
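To give a rough sense of how these components are wired together, here is a minimal sketch using the AWS SDK for Python (boto3). The mesh, node, router, route, and service names are hypothetical, and the specs are trimmed to the essentials; a real setup would typically add health checks, backends, logging, and TLS settings.

```python
import boto3

appmesh = boto3.client("appmesh")

# A mesh is the logical boundary that contains all of the resources below.
appmesh.create_mesh(meshName="myshop-mesh")

# A virtual node representing the Products API, discovered through AWS Cloud Map.
appmesh.create_virtual_node(
    meshName="myshop-mesh",
    virtualNodeName="products-vn",
    spec={
        "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
        "serviceDiscovery": {
            "awsCloudMap": {"namespaceName": "myshop.local", "serviceName": "products"}
        },
    },
)

# A virtual router that listens for HTTP traffic destined for the Products API.
appmesh.create_virtual_router(
    meshName="myshop-mesh",
    virtualRouterName="products-vr",
    spec={"listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}]},
)

# A route that sends all traffic reaching the router to the products virtual node.
appmesh.create_route(
    meshName="myshop-mesh",
    virtualRouterName="products-vr",
    routeName="products-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {"weightedTargets": [{"virtualNode": "products-vn", "weight": 100}]},
        }
    },
)

# A virtual service that exposes the router as the logical 'Products' service.
appmesh.create_virtual_service(
    meshName="myshop-mesh",
    virtualServiceName="products-vs",
    spec={"provider": {"virtualRouter": {"virtualRouterName": "products-vr"}}},
)
```

The same pattern (virtual service → virtual router → route → virtual node) is repeated for every microservice that joins the mesh.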
Traffic flow through a Mesh
To employ AWS App Mesh, communication patterns and configurations are defined via the AWS Management Console or the AWS SDKs: virtual nodes, routers, routes, and listeners are declared to match the application’s architecture, and the service mesh then ensures that traffic is routed and managed in line with those definitions.
With a basic understanding of the individual components and their roles, it is worth walking through how they work together as a cohesive unit.
Consider a scenario where customers (or users) want to view (using a browser) or retrieve (via API calls) the products that a company is selling and the prices of each product. The business hosts their applications on AWS and is leveraging AWS App Mesh and Elastic Container Service (ECS).
The figure below shows an example microservices architecture for the use case under discussion.
The architecture in a nutshell:
- Two applications / microservices, namely the Products API and the Prices API, are hosted on ECS Fargate in the same (or different) cluster.
- There is a single service mesh created for the above microservices.
- The mesh contains a gateway route, virtual service, virtual router and virtual node for each microservice.
- A separate Envoy service running on ECS Fargate serves as the entry point to the virtual gateway, accepting traffic from external sources. It is fronted by an Application Load Balancer.
The sequence of steps that unfold when a consumer initiates a call to an endpoint exposed by one of the microservices is as follows:
- The consumer (external) invokes the API endpoint https://api.myshop.com/v1/products to retrieve details of products sold by a company.
- The request is routed via Amazon Route 53 to an Application Load Balancer (ALB) that fronts an Envoy service running on ECS.
- The Envoy service is responsible for routing the request to the virtual gateway of a service mesh. This behaviour is controlled by an environment variable in the task definition of the service. Refer to https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-ecs.html and https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-config.html for details on the setup.
- The virtual gateway has gateway routes configured for each backend microservice that interfaces with external systems. These gateway routes govern traffic flow and guide requests towards the intended destinations based on conditions such as URL paths, HTTP headers, or hostnames. In this instance, consider a gateway route that uses a rule to steer traffic based on URL path matching (prefix). Upon reaching the virtual gateway, each gateway route is evaluated to find a match for the ‘/v1/products’ path. If a match is found, the request is directed to the virtual service linked to that gateway route; here, the request is forwarded to the virtual service products-vs (a configuration sketch follows this list).
- The virtual service, as mentioned earlier, is an abstraction of the actual service running on ECS and therefore has to forward the request to the backend microservice. This is where the virtual router comes into play: every virtual service is associated with a virtual router that routes traffic to a virtual node (in this case, products-vr and products-vn respectively).
- The virtual router has one or more routes that dictate which virtual node the request should be routed to. Once the request reaches the virtual router, each route’s match criteria are evaluated and traffic is directed to the associated virtual node. In this instance, assume there is only one route on the virtual router (products-vr), so all traffic is directed to the virtual node (products-vn).
- The products-vn virtual node acts as a logical representation of a single microservice, application component, or instance within the service mesh. In this case, it corresponds to the Products API (or microservice) running on Amazon ECS. To determine the destination for incoming requests, the virtual node uses AWS Cloud Map for service discovery.
- The Products API service on ECS runs a task comprising two containers: the application and Envoy. The Envoy container is configured, via environment variables in the task definition, to act as the proxy for a specific virtual node and hence for the application container (a trimmed task definition sketch follows this list). When the Envoy container receives the request, it forwards it to the application container on the loopback address and the port that the application exposes (i.e., 127.0.0.1:8080/v1/products). The application then responds to the API call with the requested data.
- Similarly, when the consumer (external) invokes the API endpoint https://api.myshop.com/v1/prices to retrieve price details, the request flows through the same mesh but via different gateway routes, virtual services, routers, and nodes to reach the backend Prices API.
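For the gateway side of the flow described above, the configuration might look roughly like the sketch below (again boto3, with hypothetical names that mirror the scenario). It creates the virtual gateway and a gateway route that matches the /v1/products prefix and targets the products-vs virtual service.

```python
import boto3

appmesh = boto3.client("appmesh")

# Virtual gateway: the mesh-side representation of the standalone Envoy service
# sitting behind the Application Load Balancer.
appmesh.create_virtual_gateway(
    meshName="myshop-mesh",
    virtualGatewayName="myshop-vgw",
    spec={"listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}]},
)

# Gateway route: match requests whose path starts with /v1/products and hand
# them to the products virtual service.
appmesh.create_gateway_route(
    meshName="myshop-mesh",
    virtualGatewayName="myshop-vgw",
    gatewayRouteName="products-gw-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/v1/products"},
            "action": {
                "target": {"virtualService": {"virtualServiceName": "products-vs"}}
            },
        }
    },
)
```

A second gateway route with a /v1/prices prefix pointing at the prices virtual service would cover the Prices API in the same way.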
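On the ECS side, the pairing of application and Envoy containers is expressed in the task definition. The sketch below (boto3 register_task_definition, heavily trimmed, with hypothetical account ID, region, and image placeholders) shows the APPMESH_RESOURCE_ARN environment variable that ties the Envoy sidecar to its virtual node; the standalone gateway Envoy uses the same variable pointed at the virtual gateway ARN instead. Proxy configuration, health checks, and IAM/logging settings are omitted for brevity, so treat this as an outline rather than a complete definition.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical ARN identifying the virtual node this Envoy sidecar represents.
VIRTUAL_NODE_ARN = (
    "arn:aws:appmesh:eu-west-1:111122223333:mesh/myshop-mesh/virtualNode/products-vn"
)

ecs.register_task_definition(
    family="products-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            # The application container, listening on port 8080.
            "name": "products-app",
            "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/products-api:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        },
        {
            # The Envoy sidecar; APPMESH_RESOURCE_ARN tells it which mesh resource
            # (here, a virtual node) it is proxying for. See the App Mesh
            # documentation linked above for the current Envoy image URI.
            "name": "envoy",
            "image": "<aws-appmesh-envoy-image>",
            "essential": True,
            "environment": [
                {"name": "APPMESH_RESOURCE_ARN", "value": VIRTUAL_NODE_ARN}
            ],
        },
    ],
)
```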
Benefits of using App Mesh
AWS App Mesh offers an array of features to enhance microservices communication and management:
- Traffic Control: App Mesh empowers you to govern and regulate traffic flow between microservices. You can establish advanced routing strategies such as percentage-based traffic splitting and canary deployments, facilitating seamless updates and minimising risk (see the sketch after this list).
- Service Discovery: App Mesh incorporates built-in service discovery, allowing microservices to locate and communicate with each other without relying on hardcoded endpoints.
- Load Balancing: It provides load balancing mechanisms to evenly distribute incoming traffic across multiple instances of a microservice.
- Observability: App Mesh integrates seamlessly with AWS CloudWatch and AWS X-Ray, granting deep insights into the interactions between microservices, which aids in issue identification and troubleshooting.
- Security: The service enables end-to-end encryption and enforces communication security among microservices through the utilisation of Transport Layer Security (TLS) certificates.
- Platform Neutrality: AWS App Mesh isn’t restricted to any specific runtime or programming language, ensuring compatibility with diverse application stacks and platforms.
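As an illustration of the traffic-control point above, a canary rollout can be expressed by adjusting the weighted targets on a route. The sketch below (boto3, reusing the hypothetical names from earlier and introducing a hypothetical products-vn-v2 virtual node for the new version) shifts 10% of traffic to the canary.

```python
import boto3

appmesh = boto3.client("appmesh")

# Send 90% of traffic to the current version and 10% to the canary version.
appmesh.update_route(
    meshName="myshop-mesh",
    virtualRouterName="products-vr",
    routeName="products-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "products-vn", "weight": 90},
                    {"virtualNode": "products-vn-v2", "weight": 10},
                ]
            },
        }
    },
)
```

Gradually shifting the weights towards the new node completes the rollout, and rolling back is simply another weight change.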
Summary
AWS App Mesh is a robust solution tailored to handle the intricate aspects of communication and administration within contemporary application structures. Its comprehensive array of tools and capabilities empowers developers to construct, deploy, and expand applications while guaranteeing smooth and dependable interactions among microservices. The key elements, encompassing Virtual Nodes, Virtual Routers, Routes, and Virtual Services, offer precise control over traffic routing, load balancing, and observability.
Integration with AWS CloudWatch and AWS X-Ray further bolsters the capacity to monitor, diagnose, and optimise microservices’ performance. Through service discovery, secure communication features, and load balancing, AWS App Mesh streamlines the intricacies linked to networking, granting developers the liberty to concentrate on building robust applications.
As contemporary application architectures continue to evolve, AWS App Mesh remains an essential tool for shaping effective, resilient, and scalable microservices-driven applications. It effectively tackles communication and management challenges, all while upholding a notable level of control and transparency.