Accelerate building agents for Amazon Bedrock using Powertools for AWS Lambda and Generative AI CDK constructs

What are Agents in Bedrock?

Amazon Bedrock Agents are generative AI applications designed to automate multistep tasks by seamlessly connecting with company systems, APIs, and data sources. These agents use the reasoning capabilities of foundation models (FMs) to break down user requests, gather relevant information, and efficiently complete tasks using the tools provided. By automating complex operational processes, Bedrock Agents free teams to focus on high-value work, enhancing productivity and innovation. 

Amazon Bedrock Agents can be applied across various business use cases to streamline operations and improve efficiency. One exciting use case is in Generative AI assisted code generation. Agents can be used to speed up the modernisation of applications by orchestrating repetitive activities like reviewing, refactoring, and unit testing huge legacy codebases into modern languages and frameworks. 

Steps to create a basic Bedrock agent

  1. Create an agent in Amazon Bedrock – To create an agent in Amazon Bedrock, you need to configure the agent’s purpose and select a foundation model (FM) that it will use to generate prompts and responses. This involves defining the agent’s instructions and setting up necessary components like action groups or knowledge bases.  
  2. Create an Action Group for the agent – Action groups define the specific actions the agent will perform. These actions are described using schemas that outline the parameters the agent needs to elicit from the user. 
  3. Define the OpenAPI Schema – The OpenAPI Schema (OAS) specifies the API operations the agent can invoke to perform its tasks. This schema includes the parameters required for the agent to interact with external systems. 
  4. Host the OAS in S3 – Once the OpenAPI schema is defined, it needs to be hosted in an S3 bucket. This allows the agent to access the schema and use it during its orchestration process. 
  5. Create Lambda function – A Lambda function is created to handle specific tasks within the agent’s workflow. This function processes input parameters and returns the necessary output to the agent. You will also need to write all the boilerplate code essential for integrating the Lambda function with Amazon Bedrock. This code ensures that the agent can invoke the Lambda function and handle its responses appropriately. 
  6. Don’t forget error handling, observability, configuration, and secrets – Implementing robust error handling, observability, configuration, and secrets management is crucial for maintaining the reliability and security of the agent. These aspects ensure that the agent can handle unexpected situations and maintain operational integrity. 

Performing these steps using the AWS Console, an approach commonly referred to as click-ops, is achievable if you know what you are doing. However, it is far from the recommended approach and not one you should be using within your organisation. It is less secure, harder to support, and not repeatable across different environments.  

Whether it is for our customers or for our own prototyping and learning, Cevo recommends using infrastructure-as-code (IaC) to repeatably provision, deploy, maintain, and tear-down AWS workloads. “But wait, isn’t that way too complex?” I hear you say. “Wouldn’t I have to know the specifics for all those AWS services, how they integrate, and how to secure them?” 

So, how can we have the best of both worlds? That is, use an IaC based approach to easily and repeatably build Agents for Amazon Bedrock and use existing, mature, toolkits to handle all the detailed boilerplate Lambda handler code so we just focus on the business logic of the solution. 

Let’s start by introducing some helpful frameworks and libraries. 

AWS Generative AI CDK Constructs

AWS CDK is an open-source software development framework that lets you define cloud infrastructure using familiar programming languages (like TypeScript, Python, Java, C#, and Go) instead of writing CloudFormation templates directly. 

CDK constructs are the basic building blocks of AWS CDK applications. They represent cloud components and encapsulate configuration details, making it easier to define and manage cloud infrastructure using higher-level abstractions. Constructs can range from low-level resources like S3 buckets to higher-level patterns that combine multiple resources. 
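To make this concrete, here is a minimal sketch (illustrative only; the stack and bucket names are made up) of a CDK stack in Python where a single L2 construct declares a versioned, encrypted S3 bucket:

from aws_cdk import Stack, aws_s3 as s3
from constructs import Construct

class DemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One L2 construct encapsulates the bucket resource plus sensible
        # defaults that would otherwise be hand-written CloudFormation.
        s3.Bucket(self, "DemoBucket",
                  versioned=True,
                  encryption=s3.BucketEncryption.S3_MANAGED)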

AWS Generative AI CDK Constructs provide high-level constructs, or patterns, to streamline the provisioning of Bedrock Agents and related resources. These constructs simplify the deployment process and ensure best practices are followed: 

  • Provision Bedrock Agents and Action Groups: L2 constructs in the library allow for easy provisioning of Bedrock Agents and their action groups. 
  • Deploy Lambda, S3, and IAM Roles: The constructs also support the deployment of Lambda functions, S3 buckets, and the necessary IAM roles and policies to support your AWS workload. 

Powertools for AWS Lambda

Powertools for AWS Lambda is a developer toolkit that simplifies the implementation of serverless best practices. It provides several helper libraries that can significantly reduce the complexity of building Bedrock Agents: 

  • Delegate Boilerplate Code: Powertools event handlers help delegate boilerplate code, allowing developers to focus on core logic. 
  • Generate OpenAPI Schema: Annotations in Powertools can be used to generate the OpenAPI schema automatically. 
  • Logging, Tracing, and Metrics: Powertools includes libraries for logging, tracing, and metrics, ensuring best practice observability. 
  • Configuration and Secrets Management: The parameters helper libraries in Powertools facilitate configuration and secrets management, enhancing security and maintainability (see the sketch after this list). 
  • Note: Event Handlers for Agents for Bedrock is only available in the Python version of Powertools for AWS Lambda. It is not yet available in any of the other languages Powertools for AWS Lambda supports. 
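As a taste of the parameters helper mentioned above, fetching a secret or a configuration value collapses to a single call. A minimal sketch, assuming hypothetical secret and parameter names:

from aws_lambda_powertools.utilities import parameters

# Values are transparently cached between invocations.
db_password = parameters.get_secret("demo/database-password")   # Secrets Manager
api_endpoint = parameters.get_parameter("/demo/api-endpoint")   # SSM Parameter Store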


In this post, we’ll demonstrate how these tools can speed up the creation of Amazon Bedrock Agents, reducing the need for boilerplate code while following best practices for provisioning AWS infrastructure and securely deploying and maintaining code in a repeatable manner.
 

Demo Walkthrough

Let’s say we want to build a code refactoring agent that automates code refactoring tasks by orchestrating interactions between foundation models, our company’s source code repository, test environment, and document repository. Building this out feels daunting, and it would be if we had to write all the IaC, action group configuration, and boilerplate code ourselves. Instead, let’s walk through the steps using the tools mentioned above that help accelerate the build. 

Note: Keep in mind, our goal here is to demonstrate how we can accelerate the building of Bedrock Agents, not to build the most effective code refactoring agent. That is just our mock business process for this demo. 😊

Prerequisites

  • AWS Account 
  • AWS CDK 2.176.0 or later (to match the version pinned in requirements.txt below) 
  • Node.js 18 or later (required by CDK even when using Python) 
  • Python 3.9 or later with pip and virtualenv 

Create the CDK Project

Create a new AWS CDK project in a new empty project directory.

> mkdir blog-agents-for-bedrock-using-powertools-and-cdk
> cd blog-agents-for-bedrock-using-powertools-and-cdk
> cdk init app --language python

Activate the Python virtual environment created by CDK.

> source .venv/bin/activate

Replace the contents of the requirements.txt file in the root of the project with the following.

aws-cdk-lib>=2.176.0
aws_cdk.aws_lambda_python_alpha
aws_lambda_powertools>=3.5.0
aws_lambda_powertools[tracer]>=3.5.0
cdklabs.generative-ai-cdk-constructs==0.1.290
constructs>=10.0.0,<11.0.0
pydantic>=2.10.6
pydantic[email]>=2.10.6
typing_extensions>=4.12.2

Install our dependencies using pip.

> pip install -r requirements.txt

Define our AWS Infrastructure

  • Rename the stack directory that the cdk init process created (mine was named blog-agents-for-bedrock-using-powertools-and-cdk) to simply cdk in the root of your project. 
  • Create a new file named bedrock_agent_stack.py in the cdk directory and add the following code.

from aws_cdk import (
    CfnOutput,
    RemovalPolicy,
    Stack,
)
from constructs import Construct

from aws_cdk import aws_s3 as s3
from aws_cdk.aws_lambda import Runtime
from aws_cdk.aws_lambda_python_alpha import PythonFunction

from cdklabs.generative_ai_cdk_constructs import bedrock

class BedrockAgentStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # S3 bucket to hold the project source files our tools operate on
        # (referenced later in the Test section).
        source_bucket = s3.Bucket(
            self,
            "SourceBucket",
            removal_policy=RemovalPolicy.DESTROY,
            auto_delete_objects=True,
        )

        # Lambda function packaging our agent tools business logic. The bucket
        # name is passed via the S3_BUCKET environment variable tools.py expects.
        action_group_function = PythonFunction(
            self,
            "LambdaFunction",
            runtime=Runtime.PYTHON_3_12,
            entry="./src",
            index="tools.py",
            handler="lambda_handler",
            environment={"S3_BUCKET": source_bucket.bucket_name},
        )
        source_bucket.grant_read_write(action_group_function)

        # The Bedrock Agent itself, backed by Anthropic Claude Haiku.
        agent = bedrock.Agent(
            self,
            "Agent",
            foundation_model=bedrock.BedrockFoundationModel.ANTHROPIC_CLAUDE_HAIKU_V1_0,
            instruction="You are a helpful and friendly agent that answers questions about refactoring code.",
            code_interpreter_enabled=True,
            user_input_enabled=True,
            should_prepare_agent=True,
            force_delete=True,
        )

        # Action group wiring the agent to our Lambda via the OpenAPI schema.
        action_group: bedrock.AgentActionGroup = bedrock.AgentActionGroup(
            name="CodeRefactoringTools",
            description="Use these functions for code refactoring support",
            executor=bedrock.ActionGroupExecutor.fromlambda_function(
                lambda_function=action_group_function,
            ),
            enabled=True,
            api_schema=bedrock.ApiSchema.from_local_asset("src/openapi.json"),
            force_delete=True,
        )
        agent.add_action_group(action_group)

        # Surface the bucket name so we can upload test files to it later.
        CfnOutput(self, "SourceBucketName", value=source_bucket.bucket_name)

Let’s step through what we added:

  • Python imports for Stack and L2 constructs of the AWS services we need to provision
    • Note we are importing the bedrock L2 constructs from the AWS Generative AI CDK Constructs module
    • We also import the Lambda Runtime and PythonFunction modules from the AWS CDK library
  • We then define our BedrockAgentStack as the deployable unit containing the AWS resources we need to provision. Under this stack we add:
    • S3 bucket to hold the project source files our tools operate on. Its name is passed to the Lambda function via the S3_BUCKET environment variable, and the function is granted read/write access to the bucket.
    • PythonFunction configured to package our agent tools handler business logic. More details about the source code for the Lambda function and the business logic we’ll implement are below.
    • Bedrock Agent configured to use the Anthropic Claude Haiku Foundation Model.
  • Next, we add an Agent Action group to invoke our business logic (PythonFunction) and specify the OpenAPI schema defining how the Agent should call the function and what it can expect in the response.
    • Note: We will generate the OAS from annotations in our business logic implementation and save it in src/openapi.json.

To add our BedrockAgentStack to our CDK infrastructure app, let’s replace the content of app.py in the root of our project with the following:

#!/usr/bin/env python3
import os
import aws_cdk as cdk
from cdk.bedrock_agent_stack import BedrockAgentStack

app = cdk.App()

BedrockAgentStack(app, "BedrockAgentStack",
    stack_name="DemoBedrockAgentStack",
    env=cdk.Environment(
        account=os.getenv("CDK_DEFAULT_ACCOUNT"),
        region=os.getenv("CDK_DEFAULT_REGION"),
    )
)

app.synth()

Develop our Agent Tools Lambda Function

With the AWS infrastructure done, let’s focus on the business logic. In our demo solution we need to provide the Agent with the necessary tools to refactor our project source code. To keep the demo simple, this means it needs to discover what project files are available, get and set the contents of source code files, validate the refactored code by executing it, and save any miscellaneous (non-source code) files it needs to generate.

In the next section we’ll add the following actions that can be called by our Agent:

  • /list_project_source_files
  • /get_project_source_file
  • /set_project_source_file
  • /exec_sql
  • /save_misc_file

To create the Agent Tools Lambda Function:

  • Create a new directory at the root of the project called src.
  • Create a new Python file named tools.py in the src directory and add the following code.

import os
import boto3

from typing_extensions import Annotated

from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.event_handler import BedrockAgentResolver
from aws_lambda_powertools.event_handler.openapi.params import Body, Query
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()

# Read with .get() so this module can also be run locally (where the
# environment variable is not set) to generate the OpenAPI schema below.
S3_BUCKET = os.environ.get("S3_BUCKET", "")

# Init an event resolver for Amazon Bedrock Agents
app = BedrockAgentResolver()

@app.get("/list_project_source_files", description="Lists available source code files")
@tracer.capture_method
def list_project_source_files(
    project: Annotated[str, Query(title="Project", description="The project to filter on when listing source code files.")],
) -> Annotated[dict, Body(description="List of available source code files for the given project")]:
    logger.append_keys(
        session_id=app.current_event.session_id,
        action_group=app.current_event.action_group,
        input_text=app.current_event.input_text,
    )

    s3_client = boto3.client("s3")
    paginator = s3_client.get_paginator('list_objects_v2')

    objects = []
    try:
        for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=project):
            if 'Contents' in page:
                for obj in page['Contents']:
                    objects.append(obj['Key'])

        logger.info("list_project_source_files response successful")
        return {"files": objects}
    except Exception as e:
        logger.error(f"Error listing files: {str(e)}")
        raise

@app.get("/get_project_source_file", description="Fetches a given project source code file.")
@tracer.capture_method
def get_project_source_file(
    project_file: Annotated[str, Query(title="Project file", description="The project file to fetch.")],
) -> Annotated[dict, Body(description="The source code for the given project file")]:
    logger.append_keys(
        session_id=app.current_event.session_id,
        action_group=app.current_event.action_group,
        input_text=app.current_event.input_text,
    )

    s3_client = boto3.client("s3")
    try:
        response = s3_client.get_object(Bucket=S3_BUCKET, Key=project_file)
        file_content = response['Body'].read().decode('utf-8')

        logger.info("get_project_source_file response successful")
        return {"file_content": file_content}
    except Exception as e:
        logger.error(f"Error fetching file: {str(e)}")
        raise

@app.post("/set_project_source_file", description="Saves the given project source code file.")
@tracer.capture_method
def set_project_source_file(
    project_file: Annotated[str, Body(title="Project file", description="The project file to write to.")],
    project_file_content: Annotated[str, Body(title="Project file contents", description="The source code contents to write to the project file.")],
) -> Annotated[bool, Body(description="Successfully updated project source code file")]:
    logger.append_keys(
        session_id=app.current_event.session_id,
        action_group=app.current_event.action_group,
        input_text=app.current_event.input_text,
    )

    s3_client = boto3.client("s3")
    try:
        s3_client.put_object(Bucket=S3_BUCKET, Key=project_file, Body=project_file_content)

        logger.info("set_project_source_file response successful")
        return True
    except Exception as e:
        logger.error(f"Error writing file: {str(e)}")
        raise

@app.post("/exec_sql", description="Execute the provided SQL statement against the specified database endpoint.")
@tracer.capture_method
def exec_sql(
    database_connection: Annotated[str, Body(title="Database connection", description="The database connection to use to connect to the target database.")],
    sql_statement: Annotated[str, Body(title="SQL statement", description="The SQL statement to execute against the target database.")],
) -> Annotated[bool, Body(description="Successfully executed SQL statement")]:
    logger.append_keys(
        session_id=app.current_event.session_id,
        action_group=app.current_event.action_group,
        input_text=app.current_event.input_text,
    )

    # MOCK execution of the SQL statement
    logger.info({"database": database_connection, "sql_statement": sql_statement})

    return True

@app.post("/save_misc_file", description="Saves misc files generated as part of code conversion. E.g. conversion reports")
@tracer.capture_method
def save_misc_file(
    project: Annotated[str, Body(title="Project", description="The project under which to save the file.")],
    file_name: Annotated[str, Body(title="Filename", description="The name of the file to write to.")],
    file_content: Annotated[str, Body(title="File contents", description="The contents to write to the file.")],
) -> Annotated[bool, Body(description="Successfully updated misc project file")]:
    logger.append_keys(
        session_id=app.current_event.session_id,
        action_group=app.current_event.action_group,
        input_text=app.current_event.input_text,
    )

    s3_client = boto3.client("s3")
    try:
        s3_client.put_object(Bucket=S3_BUCKET, Key=f"{project}/{file_name}", Body=file_content)

        logger.info("save_misc_file response successful")
        return True
    except Exception as e:
        logger.error(f"Error writing file: {str(e)}")
        raise


@logger.inject_lambda_context(log_event=True)
@tracer.capture_lambda_handler
def lambda_handler(event: dict, context: LambdaContext):
    return app.resolve(event, context)

# When run directly (outside Lambda), print the generated OpenAPI schema so it
# can be redirected into src/openapi.json.
if __name__ == "__main__":
    print(app.get_openapi_json_schema())
  • Create a requirements.txt file in the same location to instruct CDK to install and package our Python dependencies. We only need to specify the Lambda dependencies in this file, not all the project dependencies we specified at the root.

boto3
botocore
aws_lambda_powertools>=3.5.0
aws_lambda_powertools[tracer]>=3.5.0
pydantic>=2.10.6
pydantic[email]>=2.10.6
typing_extensions>=4.12.2


Let’s step through the tools.py lambda handler in more detail to explain what we have done:

  • You’ll notice the first thing we did was bring in some imports, specifically the aws_lambda_powertools BedrockAgentResolver.
    • This resolver from Powertools for AWS Lambda handles all the boilerplate code for the interaction between Amazon Bedrock and our Lambda handler. The only code we need is a single line to instantiate the resolver:

app = BedrockAgentResolver()

  • We then use annotations to describe our handler method signature, inputs, and outputs. This also requires the use of typing_extensions and pydantic to further refine and enrich the OpenAPI specification. For example:

@app.get("/list_project_source_files", description="Lists available source code files")
def list_project_source_files(
    project: Annotated[str, Query(title="Project", description="The project to filter on when listing source code files.")],
) -> Annotated[dict, Body(description="List of available source code files for the given project")]:
    # Implementation goes here
    ...
  • @app.get(…) annotation defines that this method is to be invoked using GET with the path /list_project_source_files. It also provides some description that the agent can use for matching.
  • project: Annotated[str, Query(…)]  defines an input parameter to be passed as part of the query string. Again, some added context is provided to help the agent determine suitable type and value when invoking.
  • Lastly, our return value is also annotated so the agent knows what to expect in return and what type the response will be in.

 

Tip: Working with Agents for Amazon Bedrock may introduce non-deterministic behaviour. Amazon Bedrock employs large language models (LLMs) to interpret and respond to user inputs. These models, trained on extensive datasets, can extract meanings from text sequences and understand word and phrase relationships. However, this capability means that identical inputs can yield different outputs, depending on the specific characteristics of the LLM in use.

The OpenAPI schema provides context and semantics to the Agent, aiding in the decision-making process for invoking Lambda functions. Sparse or ambiguous schemas can lead to unexpected results.

To help the Agent understand your functions and make accurate invocations, enrich the OpenAPI schema with as many details as possible, such as: 

–  Always describe your function’s behaviour using the description field in your annotations.

– Update the description field to reflect any changes when refactoring.

– Use distinct descriptions for each function to ensure clear semantic separation.
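To make that concrete, here is a sparse annotation next to an enriched one (illustrative only; the sparse variant is a deliberately bad example):

# Sparse: the agent must guess what this function does and what "name" means.
@app.get("/get_file")
def get_file(name: str) -> dict:
    ...

# Enriched: every element carries semantics the agent can reason over.
@app.get("/get_project_source_file",
         description="Fetches a given project source code file.")
def get_project_source_file(
    project_file: Annotated[str, Query(title="Project file",
        description="The project file to fetch.")],
) -> Annotated[dict, Body(description="The source code for the given project file")]:
    ...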

 

  • Next, we implement the business logic for each of the agent actions we are enabling our generative AI agent to perform.
    • We again leverage Powertools for AWS Lambda to provide consistent logging, tracing, and metrics, avoiding the need to re-invent this in every Lambda our team develops.
  • Notice that the actual Lambda handler implementation at the bottom reduces down to simply:

@logger.inject_lambda_context(log_event=True)
@tracer.capture_lambda_handler
def lambda_handler(event: dict, context: LambdaContext):
    return app.resolve(event, context)

 

    • Powertools for AWS Lambda has provided helper libraries for both the Bedrock agent boilerplate code and all the logging code we would have had to write ourselves. Less code means fewer bugs, after all!
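To appreciate what the resolver saves us, here is a rough sketch of what a hand-rolled handler would have to do: unpack the raw agent event, route on the API path, and re-pack the response envelope. This is simplified and for illustration only (list_files is a stand-in for our business logic; refer to the Amazon Bedrock documentation for the authoritative event format):

import json

def handcrafted_lambda_handler(event: dict, context) -> dict:
    # Pull the routing information out of the raw agent event ourselves.
    api_path = event["apiPath"]
    http_method = event["httpMethod"]
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Dispatch to the right business logic by hand.
    if api_path == "/list_project_source_files" and http_method == "GET":
        body = {"files": list_files(params["project"])}
    else:
        body = {"error": f"unknown path {api_path}"}

    # Hand-build the response envelope Bedrock expects.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }

And that is before any error handling, logging, or schema generation.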
  • Lastly, we add some code that generates the OpenAPI specification Amazon Bedrock needs to invoke our lambda handler.

if __name__ == "__main__":
    print(app.get_openapi_json_schema())

 

  • To generate the OAS and save it to the location our CDK IaC is expecting, namely src/openapi.json, run the following from the command line:

python src/tools.py > src/openapi.json

Deploy to AWS

Hopefully we have not left anything out and are ready to deploy. To check, we follow the CDK lifecycle: 

  • [DONE] cdk init
  • cdk synth or cdk diff
  • cdk deploy
  • cdk destroy

Let’s synthesise our infrastructure stack and package our lambda function to check if everything is ready for deployment.

> cdk synth


If all went well, you should see a CloudFormation template as output in the terminal and a cdk.out directory holding the generated IaC outputs. 

Note: If you are making changes to an already deployed stack, you can use cdk diff to generate the IaC and compare it against the already deployed stack. This way you can verify only the intended changes will be deployed and that you have not inadvertently introduced unwanted changes to other parts of the stack.

Let’s move to the next step in the CDK lifecycle: deploy

  • [DONE] cdk init
  • [DONE] cdk synth or cdk diff
  • cdk deploy
  • cdk destroy

> cdk deploy


AWS CDK will now package up the generated IaC and code assets and upload them to an S3 bucket in your AWS account. This bucket was created as part of CDK bootstrapping. If that has not been performed in your AWS Account and Region before, you may need to do that now. Just run cdk bootstrap for a default bootstrap of CDK into your AWS account. 
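For example, to bootstrap explicitly against a target account and Region (substitute your own values):

> cdk bootstrap aws://123456789012/ap-southeast-2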

Test

To test our simple code refactoring agent, we’ll need some example source code files to work with. In our demo solution we have provisioned an S3 bucket to hold the source files we need to refactor. So, upload some example source code files into that bucket. In my test case, I want to see if our agent can refactor MSSQL stored procedures into PostgreSQL as part of a database migration project. 

  • Note: copy the files under a folder that you’ll refer to as your “project folder” in the prompt to the agent. This provides some isolation when asking the agent to save refactored files back to another “project”. The agent should pick up on the convention and create a new project folder for the refactored code. An example upload command is shown below. 
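For example, using the AWS CLI to upload a local folder of MSSQL stored procedures as the db-migration project (replace the bucket name with the one our stack created; it is printed in the stack outputs and visible in the S3 console):

> aws s3 cp ./db-migration s3://<your-bucket-name>/db-migration --recursive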

In the AWS Console, navigate to Amazon Bedrock and select Agents from the menu on the left.

Locate and select the Agent we just provisioned.

Click on the Test button and expand the Test panel so we can see the trace details more easily.

Enter an example prompt such as the one below and click on Run.

We are migrating the db-migration project from mssql to postgresql and need to convert all sql source code files into postgresql code.
These new files need to be placed into a new project called db-migration-psql.
To validate the conversion of sql source code works as expected it needs to be executed against the target database.
At the end of the process, a conversion report needs to be generated, listing which source code files were converted and which failed.

You should start to see the agent reasoning about what steps it needs to take and begin executing those steps. Expand the trace steps to see how the agent is responding to your prompt. 

When completed, the agent returns a final response summarising the outcome of the refactoring. 

Expand the trace to see the rationale the agent comes up with and internal prompts used during each step of the orchestrated actions taken.

Check the S3 bucket for the refactored code and the generated completion report.

You can also view the CloudWatch Logs to see how the Bedrock Agent invoked the tools provided to it to execute the steps it planned. Look for entries containing the annotated method names of our tools lambda handler:

  • list_project_source_files
  • get_project_source_file
  • set_project_source_file
  • …and so on

Play around some more and see how the Amazon Bedrock Agent reasons out which steps, or actions, to take to solve the business challenge presented in the prompt, using the customised tools we provided it in the solution above. 

Clean up AWS resources

To clean up resources, perform the last step in the CDK lifecycle: destroy

  • [DONE] cdk init
  • [DONE] cdk synth or cdk diff
  • [DONE] cdk deploy
  • cdk destroy

> cdk destroy


This removes all the AWS services you deployed in the solution above, so you avoid paying for unused resources. 

Wrapping Up

By using AWS Powertools for AWS Lambda and Generative AI CDK Constructs, developers can accelerate the implementation of Amazon Bedrock Agents, reducing complexity and improving efficiency. These tools provide the necessary abstractions and best practices to build robust, scalable generative AI applications.
