AWS re:Invent 2023: Adam Selipsky Keynote

It feels like only yesterday that we were getting up at 3am to watch along as Adam took us on a year-in-review trip through AWS’s growth and ambition plans. Fast forward to today, and we were back at it again in the wee hours of the morning.

This year’s keynote was much simpler, more direct and definitely more nerdy – a subtle shift from previous years’ formats. Gone were the arbitrary themes that announcements used to be squeezed into; in their place, a simple message – “AWS constantly reinvents technology”. Reinvention was the overall theme of the keynote, but this year they really let their nerd out and went deep into some of the topics.

After all the standard pleasantries – a reminder that AWS is the biggest, most secure and most stable platform – we started to dig into this “Reinvent” theme. For once it was an oldie but a goodie that got the early stage time – Amazon S3 – and after a quick walk down memory lane, we jump straight into the first announcement.

Launching - Amazon S3 Express One Zone

This feature release of S3 is designed to increase performance and reduce latency when consuming S3 objects, at what looks like a trade-off in resiliency.

From the blog post announcing its release:

The new Amazon S3 Express One Zone storage class is designed to deliver up to 10x better performance than the S3 Standard storage class while handling hundreds of thousands of requests per second with consistent single-digit millisecond latency, making it a great fit for your most frequently accessed data and your most demanding applications. Objects are stored and replicated on purpose built hardware within a single AWS Availability Zone, allowing you to co-locate storage and compute (Amazon EC2, Amazon ECS, and Amazon EKS) resources to further reduce latency.

As well as increased performance, access costs are 50% lower than S3 Standard, making this a valuable asset in the tool bag of anyone doing substantial S3 data processing.
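For anyone wanting to kick the tyres, a minimal sketch of standing up one of these new “directory buckets” might look like the following. The bucket name and AZ ID here are hypothetical, and the CreateBucketConfiguration shape is my reading of the launch docs, so verify it before relying on it:

```python
# Sketch of creating an S3 Express One Zone "directory bucket".
# Assumptions: base name and AZ ID are hypothetical placeholders.
import json

az_id = "use1-az4"             # the single AZ that will hold the data
base_name = "my-express-demo"  # hypothetical bucket base name

# Directory bucket names embed the AZ ID and end with "--x-s3".
bucket_name = f"{base_name}--{az_id}--x-s3"

create_config = {
    "Location": {"Type": "AvailabilityZone", "Name": az_id},
    "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
}

print(bucket_name)
print(json.dumps(create_config, indent=2))

# With credentials configured, the actual calls would look like:
# import boto3
# s3 = boto3.client("s3", region_name="us-east-1")
# s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration=create_config)
# s3.put_object(Bucket=bucket_name, Key="hot/data.bin", Body=b"...")
```

The key design point is co-locating the bucket with your compute in one AZ – you choose the AZ up front, which is exactly the resiliency trade-off mentioned above.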

Adam didn’t stop there, and less than 20 minutes into the keynote it was time for the next release.

Launching - AWS Graviton4 (preview)

This is the fourth generation of AWS’s custom Arm-based chips since their launch in 2018, with two generations arriving in the past two years.

The launch post lists the chip specs as follows:

96 Neoverse V2 cores, 2 MB of L2 cache per core, and 12 DDR5-5600 channels work together to make the Graviton4 up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than the Graviton3.

New - R8g Instances for EC2

The new Graviton4 chips will be deployed in the new R8g instances, available in multiple sizes with up to triple the number of vCPUs and triple the memory of the 7th-generation (R7g) memory-optimised, Graviton3-powered instances.

AWS has been constantly reinventing technology: providing the broadest and deepest functionality, enabling the fastest pace of innovation, and delivering the most secure and reliable cloud – all while supporting the largest community of customers and partners.


Our first two announcements, while important, were just the entrée course, with Adam really warming to the key message as he starts to explore the “Generative AI Stack”.

Over the remainder of the Keynote, we’ll explore these three layers of the Generative AI Stack.

  1. Applications that leverage FMs
  2. Tools to build with LLMs & Other FMs
  3. Infrastructure for Training and Inference

And like all things AWS, we’ll start from the infrastructure layer. It’s during this section that we see a departure from recent years’ keynotes: Adam welcomes Jensen Huang, co-founder and CEO of NVIDIA, to the stage.

We get 10 mins of pretty nerdy GPU talk – if that’s your jam, make sure to watch the replay. The key message here was that NVIDIA has some awesome cluster tech that allows them to start measuring things in zettaflops.  Combining NVIDIA’s GPU tech with AWS’s Nitro backplane allows the creation of “ultra-clusters” to tackle the largest AI training activities.

Once we bid our thanks to Jensen, Adam is straight back into the next announcement.

Launching - AWS Trainium2

Trainium2 is 4x faster than the previous generation, producing 65 exaflops of on-demand supercomputing performance.

Closing out this layer of the Gen AI stack, we get a review of AWS Neuron, the SDK for interacting with ML toolchains like TensorFlow and PyTorch. This brings us up to the tooling layer of the Gen AI stack, and we start exploring the breadth of the Bedrock toolchain.

Bedrock provides a diversity of model providers, from third-party models such as Claude or Llama 2 through to AWS-provided models such as Amazon Titan. To highlight the breadth of the ecosystem, Adam invites his second guest to the stage: Dario Amodei, CEO and co-founder of Anthropic.

Here again we are treated to some nerdy wordsmithing on all things AI/ML – they made sure to hit all the trigger words, assuring us that this tech is “Safe” and its use is “Responsible”. When we heard them talk about regulation, we knew this section was coming to a close and they’d hit bingo on the AI/ML checklist.

But don’t let my snark take away from a good 10 minutes that gave a great view into how AWS is powering these businesses with flexible and available compute options.

Once we’ve heard from the industry on their models, it’s time to review the AWS models, with an overview of the Titan stack.

Titan models are trained and delivered by the AWS team, providing a broad set of general models for multiple use cases. The setup here is that, beyond the base models, AWS is about to announce three new features in Bedrock that will enable organisations to create much more customised responses based on their own information.
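Before we get to the customisation features, it’s worth seeing what a plain Bedrock call looks like. Here is a minimal sketch of invoking a Titan text model – the request body follows the Titan Text schema as I understand it from the docs, and the prompt is just an example, so treat the field names as assumptions:

```python
# Sketch of a Bedrock model invocation using the Amazon Titan Text
# request schema (field names are my reading of the docs).
import json

model_id = "amazon.titan-text-express-v1"

body = json.dumps({
    "inputText": "Summarise the benefits of zero-ETL in two sentences.",
    "textGenerationConfig": {
        "maxTokenCount": 512,   # cap the response length
        "temperature": 0.2,     # keep responses fairly deterministic
    },
})

print(body)

# With credentials configured, the runtime call would resemble:
# import boto3
# runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = runtime.invoke_model(modelId=model_id, body=body,
#                             contentType="application/json")
# print(json.loads(resp["body"].read())["results"][0]["outputText"])
```

Each provider’s models take a slightly different body schema, so the same `invoke_model` call wraps quite different payloads for Claude or Llama 2.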

Bedrock announcement #1 - Fine Tuning

Fine-tuning allows you to provide additional task-specific training data to existing models to customise them to your own needs.

The AWS announcement post goes into detail on the benefits of fine-tuning –

With fine-tuning, you can increase model accuracy by providing your own task-specific labeled training dataset and further specialize your FMs. With continued pre-training, you can train models using your own unlabeled data in a secure and managed environment with customer managed keys. Continued pre-training helps models become more domain-specific by accumulating more robust knowledge and adaptability—beyond their original training.
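Based on that description, a fine-tuning job submission might be sketched as below. The job name, bucket paths and IAM role are all hypothetical, and the parameter shape follows the CreateModelCustomizationJob API as I understand it, so check the current docs before using it:

```python
# Sketch of a Bedrock fine-tuning job submission. All names/ARNs are
# hypothetical placeholders; verify parameter names against the docs.
job_kwargs = {
    "jobName": "support-tone-ft-001",           # hypothetical
    "customModelName": "titan-support-tone",    # hypothetical
    "roleArn": "arn:aws:iam::123456789012:role/BedrockFtRole",
    "baseModelIdentifier": "amazon.titan-text-express-v1",
    "customizationType": "FINE_TUNING",  # vs. "CONTINUED_PRE_TRAINING"
    # JSON Lines of labelled {"prompt": ..., "completion": ...} records
    "trainingDataConfig": {"s3Uri": "s3://my-ft-data/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://my-ft-data/output/"},
    "hyperParameters": {"epochCount": "2", "learningRate": "0.00001"},
}

# With credentials configured:
# import boto3
# bedrock = boto3.client("bedrock", region_name="us-east-1")
# bedrock.create_model_customization_job(**job_kwargs)
print(job_kwargs["customizationType"])
```

Note the labelled/unlabelled split from the quote above: fine-tuning wants labelled prompt/completion pairs, while continued pre-training takes raw unlabelled text.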

Bedrock announcement #2 - Retrieval Augmented Generation

This feature looks really interesting: the pre-trained models are augmented with your own data on the retrieval path, allowing the injection of customer or localised content as responses are formed. The new Knowledge Bases feature of Bedrock enables secure access for these models to your company data.

Announcement post –
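To make the retrieval path concrete, here’s a rough sketch of a RAG query against a Bedrock knowledge base. The knowledge base ID and model ARN are placeholders, and the request shape follows the RetrieveAndGenerate API as I understand it, so double-check it before use:

```python
# Sketch of a RAG query against a Bedrock knowledge base.
# IDs and ARNs below are placeholders, not real resources.
request = {
    "input": {"text": "What is our refund policy for enterprise plans?"},
    "retrieveAndGenerateConfiguration": {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
}

# With credentials configured:
# import boto3
# agent_rt = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
# resp = agent_rt.retrieve_and_generate(**request)
# print(resp["output"]["text"])
print(request["retrieveAndGenerateConfiguration"]["type"])
```

The appeal here is that your documents never become part of the model – they’re fetched at query time and injected into the prompt, so updates to the knowledge base show up immediately.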

Bedrock announcement #3 - Continued pre-training for Titan Text Lite & Express

This feature is in preview today; expect to see more on it tomorrow and throughout the rest of re:Invent week.

While we are in the Bedrock layer, the hits keep on coming – it’s no good just having these models; we need the ability to interact with them, and so we see the announcement of:

Launching - Agents for Amazon Bedrock

This is a low-code solution for quickly producing agents that can leverage the foundation models and access relevant data sources.

This is the kind of solution I’m really interested in exploring, to give less technically proficient teams a way to begin experimenting with what’s possible in agent creation.

And to round out the Bedrock layer, we need to discuss job zero – security.

Bedrock itself has a pretty solid security base: customer data is kept in the customer’s hands, tuned models are copied into the customer’s domain, and all data is encrypted at rest and in transit.

What’s even more impressive is when you connect it to the existing AWS control plane and leverage things like CloudWatch and CloudTrail – you get a strong baseline for compliance and security with Bedrock.

On top of that, Bedrock is also SOC 1, 2 & 3 compliant.

All of this is reinforcing AWS’s commitment to “Responsible AI”.

This leads straight into the next announcement in the Bedrock stack.

New - Guardrails for Amazon Bedrock (preview today)

Guardrails for Amazon Bedrock exists to promote safe interactions between users and your generative AI applications by implementing safeguards customised to your use cases and responsible AI policies.

From the announcement post:

With Guardrails for Amazon Bedrock, you can consistently implement safeguards to deliver relevant and safe user experiences aligned with your company policies and principles. Guardrails help you define denied topics and content filters to remove undesirable and harmful content from interactions between users and your applications. This provides an additional level of control on top of any protections built into foundation models (FMs).
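Reading between the lines of the preview, I’d expect a guardrail definition to look something like the sketch below. The policy field names here are my assumptions from the announcement (denied topics plus content filters), not a confirmed API shape, so treat the whole thing as illustrative:

```python
# Illustrative guardrail definition for Bedrock. Field names
# (topicPolicyConfig, contentPolicyConfig, etc.) are assumptions
# based on the preview announcement, not a confirmed API.
guardrail_config = {
    "name": "support-bot-guardrail",  # hypothetical
    "blockedInputMessaging": "Sorry, I can't help with that topic.",
    "blockedOutputsMessaging": "Sorry, I can't share that.",
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "investment-advice",
                "definition": "Recommendations about specific financial products.",
                "type": "DENY",  # denied topic, per the announcement
            }
        ]
    },
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
}

# A future client call might resemble:
# bedrock.create_guardrail(**guardrail_config)
print(guardrail_config["name"])
```

The useful property is that the same guardrail sits in front of whichever FM you choose, rather than being baked into one model.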


With that, we end the second layer of the Generative AI Stack, and take time out to hear from Lidia Fonseca from Pfizer on their journey to the cloud.

An amazing effort to move 80% of their workloads to the cloud in just 42 weeks – a migration comprising 12,000 applications across 8,000 servers.

Pfizer also leveraged the power of AWS to scale up compute during 2020’s development of the COVID-19 vaccine, citing a timeline of 269 days to approval, down from the normal 8-10 years. They also mentioned that their investment in Gen AI has saved them $1 billion.

Post-Pfizer, we get onto the third and likely most interesting layer of the Gen AI landscape: the applications that leverage all of this platform.

First up, we get re-acquainted with CodeWhisperer – the AI-powered code suggestion tool that integrates directly with all modern IDEs and command lines and is touted to provide 20-40% productivity boosts for customers.

While AWS cites CodeWhisperer as free for “individual users”, note that per-developer licensing is ~$20/month if you need to connect to corporate services.

This is something that I have been putting off playing with for a while, so I think it’s time to dive in and do a deeper assessment of the productivity gains in this area.

We now head to the biggest announcement of the day – the one AWS were saving for Adam – Amazon Q.

Launch - Amazon Q (preview today)

Over most of the remaining time, Adam talks through Amazon Q and the variety of ways this solution is integrated into the AWS ecosystem.

The first aspect of Q is that it’s positioned as a ChatGPT killer, integrated not only into the AWS console but also into all AWS documentation pages.

At 4:34 this morning, mere moments after this was announced, I (your eager fanboi) logged into our AWS account to enable it, and was shocked to see that it was already there – announcing itself to me on the AWS dashboard.

Going one step further, as we continued to explore, we found Q was also enabled outside of the console on the AWS documentation sites.

Already this morning, we’ve had feedback from the team that this feature is rapidly assisting in searching documentation for specific details or answers. Our experience has found it a tad slow at the moment, but I expect there are thousands of users currently exploring this freshly released feature.

Beyond the Q agent integration with AWS properties, Q is shaping up to be a much larger toolchain of AI-powered solutions. Q’s angle is to assist in all aspects of workflow through the development and operations of solutions.

Launch - Amazon Q - Code Transformations

The next major feature announcement in the Q product suite is AI automation to assist with upgrading application codebases across language versions. It currently supports only Java, but Adam mentioned that support for .NET applications is coming soon.

A customer (I don’t remember the name offhand) has been trialling this and performed over 1,000 application upgrades in only two days. I’ll be really interested to see how this lands in the real world, but it does feel like a great tool to speed up the legwork of maintaining application currency.

Next up, we look at the business angle and how Amazon Q can be integrated with existing organisational data stores to provide contextual insights with minimal code.

We welcome Dr Matt Wood to the stage for a quick cameo – a very rushed cameo – to talk through how Q can build an agent in three steps that integrates with all of the common organisational data repositories.

And as fast as we saw Matt, he was off again and Adam was talking about business intelligence.

Launch - Amazon Q for QuickSight (preview today)

Q now extends into QuickSight too, helping you not only query data but also update visuals and fill in search queries. I’ve been a big fan of QuickSight for a few years now, so I’ll be interested to see how Q can help with development and consumption within this toolchain.


So that completes the stack – the introduction of Q looks to unify the front door to the deep technology, taking on other Gen AI competitors in the consumption space. I’d expect to see many more Q-branded services over the coming months.


Let’s just skip over BMW talking about watching movies in cars – I can’t see that one ending well – as we head into the home stretch.

New - Zero-ETL integrations with Amazon Redshift

The closing section heads back down the stack into the data plane and calls out a series of new “zero-ETL” solutions integrating Redshift seamlessly with Aurora, RDS and DynamoDB.
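As a rough idea of the setup involved, wiring one of these integrations up programmatically might look like the sketch below. The ARNs and integration name are placeholders, and the call follows the RDS CreateIntegration API as I understand it, so verify the parameter names first:

```python
# Sketch of an Aurora-to-Redshift zero-ETL integration. ARNs and the
# integration name are hypothetical placeholders.
integration_kwargs = {
    "IntegrationName": "orders-to-redshift",  # hypothetical
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:orders-aurora",
    "TargetArn": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/abc123",
}

# With credentials configured:
# import boto3
# rds = boto3.client("rds", region_name="us-east-1")
# rds.create_integration(**integration_kwargs)
print(integration_kwargs["IntegrationName"])
```

The “zero-ETL” pitch is that once the integration exists, AWS handles the replication into Redshift for you – no pipelines, no Glue jobs to babysit.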


We also get a second Zero-ETL announcement.

New - Zero-ETL integration with OpenSearch (from DynamoDB)

I’m sure there must be a market for this, but all I can think is: if you have enough data in DynamoDB that you need OpenSearch to search it for you, the storage costs of having it ETL’d into two places might also be a big concern.

Great to see feature development, but remember kids – all this stuff ain’t free.

That brings us to the last announcement of the Keynote – while Adam reminds us about Amazon DataZone, he also drops a new preview feature.

New - Amazon DataZone - AI recommendations (preview)

With that, we end the AWS announcements – but we are not quite done. Sneakily, Adam drops one last update – something from the broader Amazon group.

New - Amazon Project Kuiper

Project Kuiper is Amazon’s global satellite network aimed at providing internet coverage across the globe. Launching mid-2024, Kuiper aims to connect the last mile via satellite, with everything integrated through both public and private links into AWS and other internet-based services.

That's all folks...

In closing, there were a great deal of new features announced – I expect we’ll see more sessions and updates over the remaining keynotes and once some of the in-person sessions are held.

If you made it this far, thanks for reading, and your reward is this quick cheat sheet on this morning’s announcements.

  1. Amazon S3 – Express One Zone
    1. High performance and low latency for most frequently accessed data.
    2. Single-digit millisecond latency
    3. Up to 10x better performance than S3 Standard
  2. Graviton 4 (30% faster)
  3. NVIDIA DGX Cloud
  4. AWS Trainium2
  5. Bedrock enhancements
    1. Fine-tuning
    2. RAG
    3. Continued pre-training
    4. Agents for Bedrock (GA)
    5. Guardrails for Bedrock
  6. Amazon Q
  7. Amazon Connect enhancements.
  8. Zero ETL integration with Redshift
    1. Aurora PostgreSQL
    2. RDS Postgres
    3. DynamoDB
    4. OpenSearch Service
  9. Amazon DataZone (AI Recommendations)
  10. Amazon Project Kuiper (mid 2024)

Enjoyed this blog?

Share it with your network!
