Why cross-account messaging still challenges AWS architects in 2025
Cross-account messaging in AWS is one of those persistent challenges I’ve come across time and again, especially in multi-account environments. While Amazon MQ works seamlessly with AWS Lambda inside a single account, it doesn’t natively support cross-account message consumption, which makes reliable integration across teams, apps, and environments feel more complicated than it should be.
In this guide, I’ll walk you through how I solved this using Amazon MQ, EKS, and Lambda to build a secure, scalable, and real-time architecture. It’s designed to handle cross-account data flows with minimal friction, enabling you to sync applications and external platforms like Salesforce, without sacrificing security or performance.
Why Lambda alone can’t solve it and what that means for your architecture
In multi-account setups, securely syncing data between AWS accounts is a challenge, especially when using Amazon MQ and AWS Lambda. While Lambda can easily consume messages from Amazon MQ in single-account setups, it does not support cross-account event source mappings for MQ. This creates the need for additional infrastructure to relay messages from the source account to the target account.
There are several ways to achieve cross-account message consumption, such as a custom consumer application, EventBridge, AWS PrivateLink, or cross-account IAM roles. For this solution, I focused on using a consumer app to read messages from the source MQ and republish them to a target MQ in another account. This approach bridges the gap and enables reliable message processing across accounts, ultimately facilitating seamless data synchronisation with external systems like Salesforce.
How we enable real-time, secure cross-account communication from the Merchant Portal
Let me give you the scenario that drove this solution.
We have a Merchant Portal application hosted in the Source AWS Account that enables users to submit applications, which are stored in a backend database. Meanwhile, the Target AWS Account is responsible for processing these applications and updating their statuses in Salesforce, which serves as the single source of truth.
Given that Salesforce resides outside both AWS accounts, and Lambda cannot directly consume messages from a cross-account Amazon MQ, a robust and secure architecture is needed to:
- Capture application status changes in the Source AWS Account,
- Transfer these status updates securely to the Target AWS Account,
- Apply necessary business logic within the Target account, and
- Synchronise the updated application status back to Salesforce via a REST API.
Note: I leveraged the RabbitMQ engine on Amazon MQ as the core messaging infrastructure.
How to relay messages securely from one AWS account to another
Here’s the end-to-end message flow I built:
User (Merchant Portal) → Application Database → Amazon MQ (Source Account) → EKS Pod in the Target AWS Account (consumes from the source MQ and publishes to the target MQ) → Amazon MQ (Target Account) → Lambda Function → REST API → Salesforce
A step-by-step breakdown of the architecture
To enable seamless application status synchronisation across AWS accounts and integrate with Salesforce, I combined Amazon MQ (RabbitMQ), Amazon EKS, and AWS Lambda to bridge the gap where cross-account event source mapping is not natively supported.
Source AWS Account
Merchant Portal
- Users submit applications via a web interface.
- Application data is stored in a backend database.
- On status change, the portal publishes a message to Amazon MQ.
Amazon MQ (RabbitMQ)
- An exchange (e.g., Application.Status) is defined.
- Messages are published with a routing key (e.g., application.status.update).
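For context, here is a minimal sketch, assuming the pika Python client, of how the portal might publish a status change. The endpoint, credentials, and payload fields are illustrative, not the portal’s actual code:

import json
import pika

# Illustrative AMQPS endpoint for the source broker (Amazon MQ for RabbitMQ requires TLS)
params = pika.URLParameters(
    "amqps://portal_user:PASSWORD@b-1234-example.mq.ap-southeast-2.amazonaws.com:5671")

with pika.BlockingConnection(params) as conn:
    channel = conn.channel()
    channel.basic_publish(
        exchange="Application.Status",
        routing_key="application.status.update",
        body=json.dumps({"applicationId": "APP-123", "status": "APPROVED"}),
        # delivery_mode=2 marks the message as persistent
        properties=pika.BasicProperties(content_type="application/json",
                                        delivery_mode=2),
    )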
Target AWS Account
EKS-based consumer (Relay service)
- Deployed in a Kubernetes pod.
- Connects to the Source MQ using a temporary queue for secure, ephemeral message consumption.
- Binds the temporary queue to the source exchange (Application.Status).
- On receiving a message, it republishes to the Target MQ exchange (e.g., internalStatusExchange).
Amazon MQ (RabbitMQ) – Target account
- Hosts a persistent queue (statusSyncQueue) bound to internalStatusExchange.
Lambda Function
- Configured with an event source mapping to statusSyncQueue.
- For each message:
- Parses content
- Applies business logic
- Sends a REST API request to Salesforce to update application status
- Logs the outcome
- Sends failed messages to a DLQ or error queue for retry and analysis.
How the EKS relay works, startup scripts, message parsing, and routing
The EKS-based application includes two key components – a Consumer that pulls messages from the source MQ and a Producer that forwards them to the target MQ.
1. Startup script
A startup script is configured to run automatically when the EKS pod initialises. This script ensures the consumer begins listening as soon as the container is ready.
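In Kubernetes terms, the simplest form of this is making the consumer script the container’s command, so the pod starts consuming immediately and Kubernetes restarts it if it exits. An illustrative Deployment excerpt (the image name, script path, and Secret name are assumptions):

containers:
  - name: mq-relay
    image: 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/mq-relay:latest
    command: ["python", "relay.py"]
    envFrom:
      - secretRef:
          name: mq-relay-credentials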
2. Consumer logic
- Connects to the Source Amazon MQ (RabbitMQ) using provided credentials and endpoint.
- Dynamically creates a temporary queue and binds it to a predefined exchange (e.g., Application.Status) with the appropriate routing key (e.g., application.status.update).
- Listens for incoming messages on this temporary queue.
- Upon receiving a message:
- Parses the message payload.
- Passes the message to the producer component for further handling.
3. Producer logic
- Connects to the Target Amazon MQ instance.
- Publishes the parsed message to a designated exchange (e.g., internalStatusExchange) using a defined routing key.
- Ensures messages are published reliably, with appropriate error handling and logging in place.
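Putting the consumer and producer together, here is a minimal sketch, again assuming the pika client. The environment variable names are illustrative, and the reconnection and heartbeat handling a production relay needs is omitted for brevity:

import os
import pika

SOURCE_EXCHANGE = "Application.Status"
TARGET_EXCHANGE = "internalStatusExchange"
ROUTING_KEY = "application.status.update"

# One AMQPS connection per broker; URLs are injected via environment variables
source_ch = pika.BlockingConnection(
    pika.URLParameters(os.environ["SOURCE_MQ_URL"])).channel()
target_ch = pika.BlockingConnection(
    pika.URLParameters(os.environ["TARGET_MQ_URL"])).channel()

# Exclusive server-named queue: removed automatically when the relay disconnects
temp_queue = source_ch.queue_declare(queue="", exclusive=True).method.queue
source_ch.queue_bind(exchange=SOURCE_EXCHANGE, queue=temp_queue,
                     routing_key=ROUTING_KEY)

def relay(ch, method, properties, body):
    # Any parsing or transformation of the payload would happen here
    target_ch.basic_publish(exchange=TARGET_EXCHANGE,
                            routing_key=ROUTING_KEY, body=body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after republish

source_ch.basic_consume(queue=temp_queue, on_message_callback=relay)
source_ch.start_consuming()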
4. Environment configuration
The application uses environment variables for broker endpoints, credentials, and exchange and routing-key names, so the same image can be deployed across environments (dev/preprod/prod).
MQ password variables are injected via Kubernetes Secrets (non-sensitive settings via ConfigMaps) to keep sensitive information secure.
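For example, the broker URLs and credentials might be supplied through a Kubernetes Secret referenced by the relay Deployment. A minimal illustration, where the names and values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: mq-relay-credentials
stringData:
  SOURCE_MQ_URL: amqps://relay_user:CHANGE_ME@<source-broker-endpoint>:5671
  TARGET_MQ_URL: amqps://relay_user:CHANGE_ME@<target-broker-endpoint>:5671

The relay container then loads these via an envFrom/secretRef entry in its pod spec, as shown in the Deployment excerpt earlier.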
Lambda processing and Salesforce integration
Since the Lambda function is subscribed to an Amazon MQ queue (RabbitMQ), it will be invoked automatically whenever new messages arrive in the queue.
Below is a sample Lambda function, sketched in Python, to handle and decode messages from RabbitMQ (error handling and logging are kept minimal):
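import base64
import json

def lambda_handler(event, context):
    # Messages are grouped under "queueName::virtualHost" keys
    for queue, messages in event.get("rmqMessagesByQueue", {}).items():
        for message in messages:
            # Each message body arrives base64-encoded
            payload = json.loads(base64.b64decode(message["data"]))
            print(f"Queue {queue}: received {payload}")
            # Business logic goes here, e.g. a Salesforce REST API call
    return {"status": "processed"}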
The event.eventSource field must be 'aws:rmq' for RabbitMQ event source mappings. After decoding the message, you can implement any business logic, such as:
- Calling a REST API
- Writing to a database
- Publishing to another service
How to simulate messages and validate end-to-end behaviour
You can test this Lambda function in two ways:
- Publish a message to the RabbitMQ queue directly via Amazon MQ
- Invoke Lambda using a sample payload
Here’s a sample event payload you can use to test, trimmed to the fields the handler reads (the ARN and broker ID are placeholders):
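{
  "eventSource": "aws:rmq",
  "eventSourceArn": "arn:aws:mq:ap-southeast-2:111122223333:broker:target-broker:b-0000-example",
  "rmqMessagesByQueue": {
    "sample_queue::/": [
      {
        "basicProperties": {
          "contentType": "application/json",
          "deliveryMode": 1,
          "priority": 0,
          "bodySize": 31
        },
        "redelivered": false,
        "data": "eyJrZXkiOiJ2YWx1ZSIsImFjdGlvbiI6InRlc3QifQ=="
      }
    ]
  }
}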
- sample_queue is your queue name.
- ::/ represents the default RabbitMQ virtual host (/).
- The data field is base64-encoded JSON. Decoded, its value is:
{
  "key": "value",
  "action": "test"
}
The Lambda function will automatically decode and process this payload as shown in the sample logic above.
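For instance, if you save the payload above as sample-event.json, you can invoke the function from the AWS CLI (the function name here is a placeholder):

aws lambda invoke \
  --function-name statusSyncFunction \
  --cli-binary-format raw-in-base64-out \
  --payload file://sample-event.json \
  response.json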
Why this architecture works
Since AWS Lambda does not support cross-account consumption from Amazon MQ, we use an EKS-based consumer as a relay mechanism. This consumer connects to the Source MQ via a temporary queue, reads messages, and republishes them to the Target MQ under a new exchange.
This relay pattern enables:
- Security boundary isolation between AWS accounts
- Fine-grained IAM roles for producer and consumer apps
- Resilience via container orchestration and retries
Once republished to the target queue, a Lambda function processes the messages and updates Salesforce via REST API.
This architecture not only ensures reliable and decoupled message flow but also aligns with security and compliance best practices for containerised workloads, as outlined in our ‘Enabling a CIS-Compliant EKS Release Pipeline’ guide, which is a valuable companion when deploying EKS-based services in production.
You can also check out the related AWS documentation, ‘Using Lambda with Amazon MQ’.
Infrastructure as code and automation
All the above code can be deployed using Terraform, Ansible, or any other Infrastructure as Code (IaC) tool, which I will cover in Part 2 of this blog.
Want to eliminate cross-account integration roadblocks?
This architecture empowers you to confidently build secure, scalable, and event-driven systems across AWS accounts. By leveraging Amazon MQ, an EKS-based relay service, and AWS Lambda, you can overcome native cross-account limitations and keep critical systems like Salesforce in sync, reliably, securely, and in real time.
In Part 2, I will guide you through deploying this full architecture as Infrastructure as Code with Terraform and Ansible, so you can automate it end to end. To stay updated on the latest technical insights, architecture blogs, and hands-on tutorials, including Part 2 of this series, follow us on LinkedIn.
Rohit Gupta is a seasoned AWS Consultant with 15+ years in IT, specialising in cloud-native development, DevOps, and automation. He enjoys trekking, cooking, music, and travel and is passionate about AI, serverless, and building scalable cloud solutions.