TL;DR
Advanced prompt engineering for AWS isn’t about memorising tricks; it’s about guiding AI to think step by step, learn from examples, iterate with you, and stay realistic within your project constraints. Techniques like chain-of-thought prompting, few-shot learning, and iterative refinement can transform AI into a reliable consulting partner.
Alright, so you’ve read my first post about Prompt Engineering for AWS: Why I Started Having Real Conversations with AI, and now you’re thinking “okay, what’s next?”
Fair question. Because once you get the hang of basic prompt engineering, you start wondering what else is possible. Can you get AI to think through problems step by step? Can you teach it by example? Can you have actual back-and-forth conversations that build on each other?
The short answer is yes. In this second instalment on prompt engineering for AWS, I’ll share some advanced techniques that work well when dealing with real-world constraints.
Advanced Prompt Engineering That Lifts Your Game
Getting AI to Show Its Work (Chain-of-Thought)
This one is simpler than it sounds. Instead of letting AI jump straight to conclusions, you ask it to walk through its thinking step by step.
Here’s why this matters: when I’m designing AWS architectures for clients, I need to understand the reasoning behind recommendations. If an AI suggests using ECS instead of EKS, I need to know why, because I’ll have to defend that choice in a room full of engineers who all have opinions.
How I used to do it:
Design a container orchestration solution for our microservices.
What I do now:
Walk me through your thinking process step by step:
1. First, what are the key factors I should consider for container orchestration?
2. Then, evaluate ECS vs EKS vs Fargate for my specific situation
3. Next, what are the trade-offs and risks of each option?
4. Finally, give me your recommendation with clear reasoning
My situation: 15 microservices, team of 4 developers with limited Kubernetes experience,
$3000/month budget, need to be production-ready in 8 weeks.
The difference is night and day. Instead of getting a generic “use EKS because it’s industry standard” response, I get a detailed analysis that considers my team’s skills, timeline, and budget constraints.
Pro tip: This technique is especially powerful when you’re dealing with complex AWS decisions where multiple services could work, but you need to pick the right one for your specific situation.
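If you want to run this kind of prompt from a script instead of a chat window, here’s a minimal sketch using the Amazon Bedrock Converse API via boto3. The region and model ID are assumptions on my part; any Converse-capable model should behave the same way.

```python
import boto3

# Bedrock Runtime client (assumes AWS credentials and Bedrock model
# access are already set up in this region).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Chain-of-thought scaffold: ask for the reasoning steps explicitly,
# then supply the real-world context the answer must account for.
prompt = """Walk me through your thinking process step by step:
1. First, what are the key factors I should consider for container orchestration?
2. Then, evaluate ECS vs EKS vs Fargate for my specific situation
3. Next, what are the trade-offs and risks of each option?
4. Finally, give me your recommendation with clear reasoning

My situation: 15 microservices, team of 4 developers with limited Kubernetes
experience, $3000/month budget, need to be production-ready in 8 weeks."""

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumption: swap in your model
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```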
Teaching AI by Example (Few-Shot Learning)
This one took me a while to figure out, but once I did, it became one of my most-used techniques. Basically, you show the AI examples of what good output looks like, then ask it to follow the same pattern.
I discovered this when I was frustrated with inconsistent code reviews. Sometimes AI would give me detailed, actionable feedback. Other times, it would just say “looks good” or give me generic advice that wasn’t helpful.
Here’s how I solved it:
Here are examples of the kind of code review I want:
Example 1:
Code: [Simple function with a few issues]
Review:
- Security: Missing input validation (HIGH) - Add parameter sanitisation on lines 15-17
- Performance: Inefficient database query (MEDIUM) - Consider adding an index on user_id
- Style: Inconsistent naming (LOW) - Use camelCase consistently
- Next steps: Fix security issue first, then optimise query
Example 2:
Code: [More complex function]
Review:
- Architecture: Function doing too much (HIGH) - Split into separate validation and processing functions
- Error handling: No exception handling (HIGH) - Add try-catch blocks around database calls
- Testing: Missing edge case coverage (MEDIUM) - Add tests for empty input scenarios
- Next steps: Refactor architecture, then add error handling
Now review this code using the same format and level of detail:
[My actual code here]
The result? Consistent, actionable code reviews every single time. The AI learned the pattern and quality level I wanted.
Where this really shines: Documentation, code reviews, architecture assessments, and any task where you want consistent output quality.
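Because the examples do the heavy lifting here, I find it useful to treat them as data and assemble the prompt programmatically, so the same pattern scales as you collect better examples. A minimal sketch in plain Python; the snippet and review text are placeholders, not real reviews.

```python
# Few-shot assembly: each entry pairs a code snippet with a review in
# exactly the format and depth we want the model to imitate.
EXAMPLES = [
    {
        "code": "def get_user(id): ...",  # placeholder snippet
        "review": (
            "- Security: Missing input validation (HIGH) - Sanitise the id parameter\n"
            "- Performance: Inefficient database query (MEDIUM) - Add an index on user_id\n"
            "- Next steps: Fix security issue first, then optimise query"
        ),
    },
]

def build_few_shot_prompt(code_to_review: str) -> str:
    """Stitch the worked examples and the target code into one prompt."""
    parts = ["Here are examples of the kind of code review I want:"]
    for i, ex in enumerate(EXAMPLES, start=1):
        parts.append(f"\nExample {i}:\nCode: {ex['code']}\nReview:\n{ex['review']}")
    parts.append(
        "\nNow review this code using the same format and level of detail:\n"
        + code_to_review
    )
    return "\n".join(parts)

prompt = build_few_shot_prompt("def process(data): ...")  # your actual code here
```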
Having Real Conversations (Iterative Refinement)
This might be the most important technique I’ve learned, and it’s also the most obvious one that everyone ignores: you can have actual conversations with AI.
Most people write one massive prompt, get a response, and then either use it or start over. That’s like saying everything you need to say in one breath, then hanging up the phone.
Instead, I treat AI interactions like consulting conversations. I start broad, then drill down into specifics based on what I learn.
Here’s a real example from last month:
Round 1:
I need to improve the performance of a web application that's getting slow.
What should I be looking at?
Round 2 (after getting the initial response):
Thanks for the overview. Let me be more specific about my situation:
- React frontend, Node.js backend, PostgreSQL database
- Response times went from 200ms to 3 seconds over the past month
- Traffic increased 50% but that doesn't explain the 15x slowdown
- The slowdown seems to happen during business hours (9am-5pm EST)
Based on this, what would you investigate first?
Round 3:
You mentioned database connection pooling. Here's what I'm seeing:
- Connection pool size: 10
- Peak concurrent users: ~500
- Database CPU usage spikes to 90% during slow periods
- Most queries are simple SELECT statements
Does this change your diagnosis? What specific steps should I take?
By the end of this conversation, I had a specific action plan that directly addressed my actual problem, not some generic performance optimisation checklist.
The key insight: Each response gives you information you can use to ask better follow-up questions. It’s like pair programming with an AI that has infinite patience.
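The same back-and-forth translates directly to code. The Bedrock Converse API is stateless, so you carry the conversation yourself by appending each reply to the message history before asking the next question. A rough sketch, under the same boto3 and model-ID assumptions as the earlier example.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumption

messages = []  # full conversation history, sent on every call

def ask(question: str) -> str:
    """Send one turn and append both sides to the running history."""
    messages.append({"role": "user", "content": [{"text": question}]})
    response = client.converse(modelId=MODEL_ID, messages=messages)
    reply = response["output"]["message"]
    messages.append(reply)  # keep the assistant turn so context accumulates
    return reply["content"][0]["text"]

# Round 1: start broad.
print(ask("I need to improve the performance of a web application that's "
          "getting slow. What should I be looking at?"))

# Round 2: drill down with the specifics the first answer surfaced.
print(ask("Thanks for the overview. React frontend, Node.js backend, "
          "PostgreSQL. Response times went from 200ms to 3 seconds. "
          "What would you investigate first?"))
```

The history is the whole point: without the earlier turns in `messages`, round 2 is just another cold, generic question.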
Working Within Real Constraints
This is where most prompt engineering advice falls apart. Academic examples assume you have unlimited time, budget, and flexibility. Real projects don’t work that way.
I learned to be brutally specific about constraints upfront, because AI will give you theoretically perfect solutions that are completely impractical for your situation.
Example of what I mean:
Design a monitoring solution with these non-negotiable constraints:
Budget: $400/month maximum (this is firm, not a suggestion)
Timeline: Must be operational in 3 weeks (we have a compliance audit)
Team: 2 part-time DevOps engineers who are already overloaded
Skills: Strong with AWS basics, limited experience with Kubernetes
Existing setup: ECS services, RDS databases, some Lambda functions
Compliance: Must meet SOC 2 requirements (we can't compromise on this)
Don't give me the "ideal" solution. Give me the best solution that actually works within these constraints.
This approach has saved me countless hours of back-and-forth where AI suggests solutions that sound great but are completely unrealistic for the actual situation.
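If you’re scripting this, one way to make constraints stick across a whole session is to pin them in the system prompt instead of repeating them every turn. A sketch under the same Bedrock assumptions as before; the constraint values are lifted from the example above.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Non-negotiable constraints go in the system prompt so every turn in
# the conversation is answered within them.
CONSTRAINTS = """You are advising on AWS architecture. Hard constraints:
- Budget: $400/month maximum (firm, not a suggestion)
- Timeline: operational in 3 weeks (compliance audit)
- Team: 2 part-time DevOps engineers, strong AWS basics, limited Kubernetes
- Existing setup: ECS services, RDS databases, some Lambda functions
- Compliance: must meet SOC 2 requirements
Never propose the "ideal" solution; propose the best solution that
actually works within these constraints."""

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumption
    system=[{"text": CONSTRAINTS}],
    messages=[{"role": "user", "content": [{"text": "Design a monitoring solution for our workloads."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```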
Advanced Prompt Engineering: When to Use Which Technique
After plenty of experimentation, here’s my practical guide:
Use chain-of-thought when:
- You need to understand the reasoning behind recommendations
- You’re making decisions that you’ll need to defend to others
- The problem has multiple valid solutions and you need to pick the right one

Use few-shot learning when:
- You want consistent output quality
- You have examples of what good looks like
- You’re doing repetitive tasks that need to follow a pattern

Use iterative refinement when:
- The problem is complex or poorly defined
- You’re not sure what you don’t know
- You want to explore different approaches

Use constraint-based prompting when:
- You have real-world limitations (budget, time, skills, compliance)
- You’ve been burned by impractical recommendations before
- You need solutions that actually work in your environment
The Real Secret: It’s About Communication, Not Tricks
Here’s what I wish someone had told me when I started getting serious about prompt engineering: the advanced techniques aren’t magic spells. They’re just better ways to communicate what you actually need.
The best prompt engineers I know aren’t the ones who memorise complex frameworks. They’re the ones who are really good at breaking down problems, asking the right questions, and having productive conversations.
If you can explain a technical problem clearly to a junior developer, you can write effective prompts. If you can have a productive conversation with a consultant, you can use iterative refinement. If you can give good examples during code reviews, you can use few-shot learning.
The AI part is just the delivery mechanism. The hard part, and the valuable part, is knowing what questions to ask and how to ask them.
Resources That Actually Help
AWS Stuff
- Advanced prompt engineering with Amazon Bedrock – The official AWS guide (it’s actually pretty good)
- Amazon Q Developer docs – How to get the most out of AWS’s AI assistant
- AWS AI/ML Blog – Where AWS publishes the good technical content
Cevo Resources
- Our data & AI work – How we help organisations implement AI solutions
- AWS partnership – Why we’re an AWS Premier Tier partner
- Our blog – More insights from the team
Ready to put advanced prompt engineering for AWS into practice?
At Cevo, we help organisations unlock the full potential of AI tools like Amazon Q and Bedrock to solve real cloud consulting challenges faster, smarter, and within your constraints. Explore our Data & AI solutions or reach out to see how we can help with your next AWS project.
Ian is a passionate and enthusiastic technology professional with vast experience in the development and management of infrastructure hosting, both on-premises and in the cloud.