Expose AWS Lambda functions to API consumers using Apigee Edge

 

Apigee Edge is an easy-to-use yet powerful API management platform with capabilities including analytics, operational automation, and API developer management. Edge provides an abstraction, or facade, for your backend service APIs by fronting them with a proxy layer that provides security, rate limiting, quotas, analytics, and more. The platform provides secure access to your services via a well-defined API that is consistent across all of your services, regardless of how they are implemented.

In this article, we will take a look at the different ways of exposing an AWS Lambda function as an API to consumers, using Apigee Edge as a proxy or passthrough.
There are several Apigee products available, so wherever Apigee is mentioned in this article, it refers to Apigee Edge for Public Cloud (https://docs.apigee.com/api-platform/get-started/get-started).

WHAT ARE THE DIFFERENT TERMS USED IN CONTEXT WITH APIGEE?

API Proxy: A core entity in Apigee that functions as a facade for APIs. An API proxy comprises a collection of configuration files, policies, and code.

API Consumer: A system that uses the APIs exposed by the API provider.

Environment: The runtime context in which an API proxy executes.

Policy: An XML configuration that can be used within an API proxy to perform various actions when the API is invoked (rate limiting, request manipulation, etc.).

ProxyEndpoint: Defines the URL of the API proxy on Apigee.

TargetEndpoint: Defines the URL of the backend service to which requests have to be routed.

Reverse Proxy: A type of API Proxy that routes inbound traffic to a backend service.

No Target Proxy: A type of API Proxy that does not have a defined target.

Hosted Target Proxy: A type of API Proxy that routes traffic to a Node.js application deployed in Apigee.
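To make the Policy definition above concrete, here is a minimal sketch of a rate-limiting Quota policy; the policy name and the limits are illustrative values, not taken from this article's proxies:

```xml
<!-- Sketch: allow 100 calls per hour (count and interval are illustrative) -->
<Quota async="false" continueOnError="false" enabled="true" name="QuotaLimit">
   <Allow count="100"/>
   <Interval>1</Interval>
   <TimeUnit>hour</TimeUnit>
</Quota>
```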

WHAT IS AN API PROXY AND HOW DOES IT WORK?

The most important entity within Apigee is the API proxy. To understand how a proxy flow works, it is important to understand how an API proxy is created and what happens when a consumer invokes an Apigee endpoint.

An API proxy configuration comprises two definitions, ProxyEndpoint and TargetEndpoint. Consider these to be XML files with a series of steps about what needs to happen when an Apigee proxy URL is invoked. Whenever a proxy is created using Apigee Edge UI, the proxy creation wizard creates a default template using the details entered by the user.

You can follow the steps mentioned in https://docs.apigee.com/api-platform/fundamentals/build-simple-api-proxy to create a proxy using the UI.

Below is an example of a simple proxy that returns the IP addresses of the Apigee-hosted machine(s).

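The generated definitions look roughly like the sketch below. The virtual host name is illustrative, and Apigee's mock target service (https://mocktarget.apigee.net/ip) is assumed as the backend that echoes the caller's IP:

```xml
<!-- ProxyEndpoint (sketch; the virtual host name is illustrative) -->
<ProxyEndpoint name="default">
   <HTTPProxyConnection>
      <BasePath>/get_ip</BasePath>
      <VirtualHost>secure</VirtualHost>
   </HTTPProxyConnection>
   <RouteRule name="default">
      <TargetEndpoint>default</TargetEndpoint>
   </RouteRule>
</ProxyEndpoint>

<!-- TargetEndpoint (sketch; the URL is an assumed sample backend) -->
<TargetEndpoint name="default">
   <HTTPTargetConnection>
      <URL>https://mocktarget.apigee.net/ip</URL>
   </HTTPTargetConnection>
</TargetEndpoint>
```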

  • When a user or consumer invokes the proxy URL ending in /get_ip, Apigee checks whether there is a deployed proxy with /get_ip as the base path in its ProxyEndpoint definition.
  • If there is one, the request is routed to that proxy and the proxy endpoint flow begins.
  • If there are policies attached, they are executed sequentially.
  • Apigee then looks at the <RouteRule> and the <TargetEndpoint> defined in the ProxyEndpoint.
  • In our case, the <TargetEndpoint> is set to default, and hence control flows to the TargetEndpoint definition named ‘default’.
  • Within the default TargetEndpoint definition, Apigee looks at the <HTTPTargetConnection> element and routes the request to the endpoint. The response from the target is then returned to the client.

INTEGRATION PATTERNS

Before going into the different integration patterns, we should ensure that we have the following prerequisites covered.

  • A Lambda function in your AWS account
  • AWS user access keys.

1. Using Amazon Application Load Balancer as the target

The simplest, yet most expensive, approach to invoking a Lambda function via Apigee is to use an AWS Application Load Balancer as the target for a proxy. The target group for the Application Load Balancer should be the Lambda function. The load balancer can be exposed using either its DNS name or a Route 53 alias. This value can be used as the target endpoint while creating a “reverse proxy” on Apigee. Once the proxy has been created, the configuration will look something like the example below.

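Assuming the proxy was created with the load balancer's DNS name as the target, the TargetEndpoint definition would be a sketch along these lines (the ALB DNS name below is illustrative):

```xml
<TargetEndpoint name="default">
   <HTTPTargetConnection>
      <!-- Illustrative ALB DNS name; a Route 53 alias would work here too -->
      <URL>https://my-lambda-alb-123456789.us-east-1.elb.amazonaws.com</URL>
   </HTTPTargetConnection>
</TargetEndpoint>
```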

This approach may not be feasible if cost is a deciding factor. However, if you don’t mind the cost, this is a good pattern to adopt. There is also the flexibility to customise request headers (e.g. _X_AMZN_TRACE_ID), since the request is routed through the load balancer.

2. Using Apigee Node.js Hosted Targets

Hosted Targets provide the flexibility to run a Node.js application within a runtime environment hosted by Apigee and expose it as a secured API. Usually, a “target” in Apigee is a backend system that is accessible via HTTPS. In the case of hosted targets, the “target” is a Node.js script which sits within an API proxy.

Getting an Edge API proxy to talk to a properly built and deployed Hosted Targets application requires a simple configuration in the proxy’s Target Endpoint. The Node.js script can make use of the in-built node modules as well as external packages (express for example). In our case, the Node.js script will utilise the aws-sdk module to invoke the lambda function.

Now, we need to use AWS access keys (access key ID and secret access key) to access AWS resources programmatically. This raises the question of how best to manage these credentials, and this is where Apigee Key Value Maps (KVMs) come into the picture.

A KVM is a collection of key/value pairs that are stored as either plain-text or encrypted strings. Creating a KVM is quite easy, as shown in the video below.

https://www.youtube.com/watch?v=KeAbj4BYaa4

You can also refer to https://docs.apigee.com/api-platform/cache/creating-and-editing-environment-keyvalue-maps for in-depth information on KVMs.


To get started, we first create a KVM as shown above and then create a “Hosted Target” proxy via the Apigee Edge UI. The proxy creation wizard creates a default template along with the three files (app.yaml, index.js and package.json) required to deploy a Node.js application. The files can be modified with the content below.

package.json

{
   "name":"hello-world",
   "version":"1.0.0",
   "main":"index.js",
   "scripts":{
      "start":"node index.js"
   },
   "author":"",
   "license":"",
   "description":"Hello World Application",
   "dependencies":{
      "aws-sdk":"2.959.0",
      "aws-xray-sdk":"3.3.3",
      "aws-xray-sdk-core":"3.3.3"
   }
}

index.js

var AWS = require('aws-sdk');
var http = require('http');
var lambda = null;

var server = http.createServer(function (request, resp) {

  if (request.method == 'POST') {
    if (!lambda) {
        var key = process.env.ACCESS_KEY; //Retrieve AWS Access Key from Env Variable
        var secret = process.env.SECRET; //Retrieve AWS Secret Key from Env Variable
        var region = 'us-east-1';
        AWS.config.update({accessKeyId: key, secretAccessKey: secret, region: region});
        lambda = new AWS.Lambda();
    }

    var body = ''; //initialise as empty string so chunks can be appended
    request.on('data', buffer => {
        body += buffer.toString(); // convert Buffer to string
    });

    request.on('end', () => {
        console.log(body);
        var params = {
            FunctionName: 'test-function', //lambda function name
            InvocationType: 'RequestResponse',
            Payload: body
        };

        lambda.invoke(params, function (err, data) {
            if (err) {
                resp.end('Error: ' + err);
            } else {
                var json = JSON.parse(data.Payload);
                var status = json['status'];
                var payload = '';
                console.log('status : ' + status);
                if(status == 200){
                    resp.statusCode = 201;
                   payload =  JSON.stringify({ description : json['description']})
                }
                else {
                    resp.statusCode = status;
                    payload = data.Payload;
                }
                resp.setHeader("Content-Type", "application/json");
                resp.end(payload);
            }
        });
    });
  } else {
    resp.statusCode = 405; //reject non-POST requests instead of leaving them hanging
    resp.end();
  }
});

server.listen(process.env.PORT || 9000, function() {
    console.log('Node HTTP server is listening');
});

app.yaml

runtime: node
runtimeVersion: 8
env:
  - name: NODE_ENV
    value: production
  - name: LOG_LEVEL
    value: 3
  - name: ACCESS_KEY # env variable name
    valueRef:
      name: aws-s3-credentials-encrypted # KVM name
      key: key # KVM key
  - name: SECRET
    valueRef:
      name: aws-s3-credentials-encrypted # KVM name
      key: secretKey # KVM key

As you can see, app.yaml references the KVM, and this is how values from the KVM are set as environment variables. The access keys are then retrieved from the environment variables when index.js runs.

Once the KVM and proxy are created, this is how the configuration looks on the Apigee Edge console.

The presence of an empty <HostedTarget/> element indicates that the proxy has no HTTP target and instead routes the request to the Node.js application deployed in the Hosted Targets environment. The files in the resources section are bundled and deployed as the Node.js application when this proxy is deployed to an environment.
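For reference, the TargetEndpoint definition of such a proxy is essentially just the empty element; a sketch matching the default generated by the wizard:

```xml
<TargetEndpoint name="default">
   <HostedTarget/>
</TargetEndpoint>
```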

This approach can be used if you want more control over the Lambda execution flow, and to perform pre- and post-processing of the request and response without using too many Apigee policies. However, bear in mind that deployment of a Node.js hosted target takes a while, depending on the size of the package. Moreover, there are restrictions on the number of hosted targets that can be deployed in an Apigee organisation.

3. Using the Apigee Lambda Extension

The recommended method to invoke a Lambda function is to use Apigee’s AWS Lambda extension. The extension streamlines interaction with a Lambda function, eliminating the hassle of creating and maintaining code scripts and KVMs. All the heavy lifting is done by the extension itself; all one has to do is install and deploy the extension to an Apigee environment. Deployed extensions are then used within a proxy via the Connector Callout policy.

Installing and deploying an extension to multiple environments is a straightforward process and can be easily done by following the steps provided in the Apigee Edge documentation.

Installing an extension https://docs.apigee.com/api-platform/extensions/configuring-an-extension

Deploying the AWS Lambda Extension https://docs.apigee.com/api-platform/reference/extensions/aws-lambda/aws-lambda-extension-100

Once the extension is installed and deployed, we can go ahead and create the API proxy. In this case, we create an Apigee “no target” proxy. The reason is that an extension callout or service callout is normally used to invoke an external service and perform some action before the request is routed to a target endpoint (for example, uploading data to S3 or logging to Datadog before hitting the target API). In our case, however, the target is a Lambda function, and Apigee does not allow a Lambda function name or ARN to be specified as the target endpoint. So a few tweaks have to be made to return the response from the Lambda function to the API consumer.

To achieve this integration, Apigee policies can be added to the proxy after it is created.

Connector Callout Policy

An Apigee policy that performs a callout to an external service. In our case, the policy is used to perform the invoke action on the Lambda function using the Lambda extension.

<?xml version="1.0" encoding="UTF-8"?>
<ConnectorCallout async="false" continueOnError="false" enabled="true" name="InvokeLambda">
   <DisplayName>InvokeLambda</DisplayName>
   <Connector>lambda_extn</Connector>
   
   <Action>invoke</Action>
   <Input><![CDATA[{
          "functionName" : "test-function",
          "invocationType" : "RequestResponse",
          "logType" : "None",
          "qualifier" : "$LATEST",
          "payload" : {request.content}
        }]]></Input>
   <Output parsed="false">lambdaResponse</Output>
   
</ConnectorCallout>

 

Javascript Policy

A policy to parse the response from the Lambda function. The response from a Lambda invocation is a stringified payload and needs to be parsed to obtain a well-formed JSON object, because we want to extract certain attributes from the response.

Consider that the lambda function returns the following response

"{\n\t\"status\": \"201\",\n\t\"description\": \"Request received successfully\"\n}"

What we want to do here is parse the response, extract the value of the ‘status’ attribute, and set it in an Apigee context variable. This context variable can then be used by other policies in the same flow. The entire parsed response can also be set in another context variable.

<?xml version="1.0" encoding="UTF-8"?>
<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" name="parseLambdaResponse">
   <DisplayName>parseLambdaResponse</DisplayName>
   <Properties />
   <ResourceURL>jsc://parseLambdaResponse.js</ResourceURL>
</Javascript>

parseLambdaResponse.js
var response = JSON.parse(context.getVariable("lambdaResponse")); //get lambdaResponse from an apigee context variable
var payload = JSON.parse(response.payload); //parse the response object and extract the payload
var statusCode = payload.status; //get the value of the key 'status'

context.setVariable("statusCode", statusCode); //set the statusCode in an apigee context variable
context.setVariable("clientResponse", JSON.stringify(payload)); //set the response body in an apigee context variable
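Outside of Apigee, the same double-parse can be sketched in plain Node.js; the response envelope and payload values below are hypothetical samples, not real extension output:

```javascript
// Sketch: the Lambda payload arrives as a JSON *string* inside the
// callout response, so it has to be parsed twice (sample values only).
const lambdaResponse = JSON.stringify({
  payload: JSON.stringify({ status: "201", description: "Request received successfully" })
});

const response = JSON.parse(lambdaResponse);   // outer response envelope
const payload = JSON.parse(response.payload);  // stringified Lambda payload
const statusCode = payload.status;

console.log(statusCode);          // "201"
console.log(payload.description); // "Request received successfully"
```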
 
AssignMessage policy

This policy can be used to enhance the request or response. In this case, the policy can be used to return the well-formatted payload and an appropriate status code back to the consumer. The Apigee context variables created in the previous policy can be used here.

<?xml version="1.0" encoding="UTF-8"?>
<AssignMessage async="false" continueOnError="false" enabled="true" name="SetClientResponse">
   <DisplayName>SetClientResponse</DisplayName>
   <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
   <Set>
      <Payload contentType="application/json">{clientResponse}</Payload>
      <StatusCode>{statusCode}</StatusCode>
   </Set>
</AssignMessage>

This is how the configuration looks in the Apigee Edge UI after the proxy is created and the new policies are added.

Invoking this proxy endpoint executes all of these policies and returns a well-formatted response with an HTTP 201 status code.

As mentioned above, this is currently the preferred approach, as it does not involve writing and maintaining code snippets. The extension can also be reused by any proxy that requires this functionality, reducing the need to maintain the Lambda invocation logic within individual proxies.

SUMMARY

Now that you have a basic understanding of the different ways to expose a Lambda function via Apigee Edge, you may want to consider which option fits your use case, taking into account factors such as cost, ease of use, and maintenance. I would personally recommend the Lambda extension approach, because most of the work involved in credential management and resource invocation is already handled and abstracted away from us.

If you want to learn more about Apigee Edge, check out the links and YouTube channel below.

https://docs.apigee.com/api-platform/get-started/tutorials 

https://www.youtube.com/playlist?list=PLIXjuPlujxxxe3iTmLtgfIBgpMo7iD7fk
