re:Invent 2020 wrap-up: Andy Jassy Keynote

Perhaps foolishly, I woke up at 2:30am to watch Andy Jassy’s keynote for re:Invent 2020; I’m the sort of person who doesn’t like spoilers, and I wanted to see it as live as possible. If you didn’t do the same thing, or if you haven’t been able to see the keynote yet, here’s my quick overview and wrap-up of the major themes and product launches.

There were a few major themes running through the presentation: first, that AWS is the largest, fastest-growing, and most featureful cloud provider out there, and that they’re continuing to accelerate; second, that the time to start adopting cloud is “as soon as possible”, as you’re leaving competitive advantage on the table if you don’t; and third, that your data is what it’s all about.

And now, on to the product announcements.

Compute

A slew of new EC2 instance types were announced, providing increased local storage (with the d3en type), macOS EC2 instances (yes, really!), and a range of new Graviton2 (ARM)-based instances from the r6g (for high memory) to the t4g (burstable).

The general principle here is that if you have the opportunity to move from x86 to ARM, then you can save up to 50% on your EC2 instance costs; and if you don’t, updating to the latest instance families will get you better price-performance with no other changes.
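As a rough back-of-the-envelope on the ARM saving (the hourly prices below are assumed, illustrative on-demand rates only — actual savings vary by instance family and workload, so check the current EC2 pricing page):

```python
# Back-of-the-envelope monthly cost comparison for a small fleet.
# The hourly prices are assumed for illustration; check AWS pricing.
HOURS_PER_MONTH = 730

prices = {
    "m5.large (x86)": 0.096,              # assumed $/hour
    "m6g.large (Graviton2/ARM)": 0.077,   # assumed $/hour
}

fleet_size = 20
for name, hourly in prices.items():
    monthly = hourly * HOURS_PER_MONTH * fleet_size
    print(f"{name}: ${monthly:,.2f}/month")

saving = 1 - prices["m6g.large (Graviton2/ARM)"] / prices["m5.large (x86)"]
print(f"Saving from moving to ARM: {saving:.0%}")
```

The headline percentage obviously moves with the real prices and with how much per-core performance your workload gets out of Graviton2.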

AWS are also introducing new custom chipsets for machine learning (ML) workloads, with their new “Trainium” chip for training ML models (seriously, who names these?!), and a new instance type based on an Intel Habana Gaudi chipset coming soon.

Containers

Along with over 100,000 other AWS customers, I’m a great fan of their own container orchestration platform, the Elastic Container Service (ECS), and it’s great news that it’s now going to be available to run on-premises too! ECS Anywhere, alongside their Kubernetes orchestration system EKS (which will also be available as EKS Anywhere), will allow businesses to run stable, mature container orchestration systems locally if they want. Even better, the Kubernetes distribution behind EKS is being open-sourced as EKS Distro!

The AWS container registry, ECR, also gets a new “public registry” feature, allowing you to host container images and provide them to the public with friendly-named repositories. This is a response to the Docker Hub rate limiting announcement, I’m sure, and it’s a welcome one.

Serverless

AWS Lambda, the function-as-a-service service (?) gets a couple of incremental improvements, and a couple of what are frankly game-changers.

First, the move from 100ms to 1ms billing granularity means that short-running Lambdas could now cost much less; and the increase in available memory from 3GB to 10GB means that larger workloads can be run, since the added memory also comes with up to 6 vCPUs per Lambda.
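To see why the granularity change matters, here’s a hedged back-of-the-envelope (the per-GB-second rate is an assumed figure for illustration; check the current Lambda pricing page):

```python
import math

# Assumed per-GB-second price for illustration; check AWS Lambda pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(duration_ms, memory_gb, granularity_ms):
    # Lambda bills duration rounded up to the nearest granularity step.
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    return (billed_ms / 1000) * memory_gb * PRICE_PER_GB_SECOND

# A hypothetical 23ms function at 512MB, invoked a million times a month:
old = invocation_cost(23, 0.5, 100) * 1_000_000  # billed as 100ms each
new = invocation_cost(23, 0.5, 1) * 1_000_000    # billed as 23ms each
print(f"100ms granularity: ${old:.2f}/month, 1ms granularity: ${new:.2f}/month")
```

For a function like that, you’re now paying for 23ms rather than 100ms per invocation — a duration-cost reduction of over three quarters.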

The absolute killer here is the release of support for Docker containers (and other OCI-compatible containers) for Lambda, with a deployment size of up to 10GB/function! You still need to implement the Lambda runtime API (so you can’t bring a plain container and expect it to just work) but AWS have made base container images available for all their runtimes which already have that built in, so it really is just a matter of plopping your existing code in there and off you go. This is important, as there are already a lot of build and deploy pipelines set up to manage containerised workloads, and pointing these at Lambda as a fully container-on-demand service with very low startup latency is fantastic.
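As a sketch of what “plopping your existing code in there” can look like: with one of the AWS-provided base images, something like the function below is all the code you supply (the file name, handler name, and event shape here are hypothetical — your Dockerfile copies the file into the base image and points the image’s command at the handler):

```python
# handler.py -- a minimal function for an AWS-provided Lambda base image.
# The base image already implements the Lambda runtime API, so this
# function is the only code we need to add to the container.
import json

def handler(event, context):
    # Echo the incoming event back as a standard Lambda-style response.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```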

Microservice CI/CD

AWS Proton is a new service which allows “infrastructure” teams to design and construct deployment frameworks for “developer” teams to consume and deploy code in standardised ways with simplicity. If you think about “enabling constraints”, where innovation is fostered by removing choices, then this is a good example: developer teams don’t have to learn about how to implement infrastructure in order to take good advantage of it.

It’s not exactly a step away from DevOps, because a good integrated team can still make use of it to reduce their future heavy lifting and need for re-invention of wheels.

Storage

EBS volumes get some love here with the introduction of a new “general purpose” volume type, the gp3. These have much higher baseline performance (3,000 IOPS) than the previous generation without the need to over-provision capacity, plus the GB-month cost is lower. Moving existing volumes from gp2 to gp3 will see improvements in performance as well as providing the ability to right-size while still costing less for the same capacity!
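A quick sketch of the difference (the per-GB-month prices are assumptions for illustration; check the EBS pricing page for your region):

```python
# Compare baseline IOPS and monthly cost of gp2 vs gp3 for a given size.
# The per-GB-month prices are assumed for illustration; check AWS pricing.
GP2_PRICE = 0.10  # assumed $/GB-month
GP3_PRICE = 0.08  # assumed $/GB-month

def gp2_baseline_iops(size_gb):
    # gp2 baseline scales at 3 IOPS/GB, floored at 100, capped at 16,000.
    return min(max(3 * size_gb, 100), 16_000)

def gp3_baseline_iops(size_gb):
    # gp3 gives a 3,000 IOPS baseline regardless of volume size.
    return 3_000

for size in (100, 500, 1000):
    print(f"{size}GB  gp2: {gp2_baseline_iops(size)} IOPS, ${size * GP2_PRICE:.2f}/mo"
          f"  |  gp3: {gp3_baseline_iops(size)} IOPS, ${size * GP3_PRICE:.2f}/mo")
```

The key point is the decoupling: on gp2 a 100GB volume only earns a 300 IOPS baseline, so people over-provision capacity just to buy IOPS; on gp3 the same volume starts at 3,000 IOPS and costs less.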

io2 Block Express is a new SAN-type service for volumes of up to 64TB with up to 256,000 IOPS at 4,000MB/s out of the box; if you have really high-throughput workloads, this is the solution for you. Coming-soon features include multi-attach (so you can have more than one EC2 instance associated), IO fencing (e.g. for enabling clustering), snapshot restore, and support for Elastic Volumes (resizing on the fly).

Database

Aurora Serverless V2 is going to be a bit of a game-changer; the previous problems with Aurora Serverless were mainly the time required to spin up (up to a minute to be ready) and the granularity of capacity increases (doubling each time). V2 reduces the spin-up time to under a second and scales capacity in much finer increments, so you aren’t accidentally costing yourself twice as much by writing the one byte that tips it over the edge. It comes with Multi-AZ, Global Database, Read Replicas, Backtrack, and Parallel Query at launch.
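To illustrate the doubling problem, here’s a small sketch (the 0.5-capacity-unit step for V2 is my assumption, purely for illustration):

```python
import math

# Aurora Serverless v1 scaled by doubling capacity units (1, 2, 4, 8, ...);
# V2 scales in much finer steps (assumed here to be 0.5 units).
def v1_capacity(needed_acu):
    # Round up to the next power of two.
    return 2 ** math.ceil(math.log2(needed_acu))

def v2_capacity(needed_acu, step=0.5):
    # Round up to the next fine-grained step boundary.
    return math.ceil(needed_acu / step) * step

for needed in (3, 9, 33):
    print(f"need {needed} units -> v1 provisions {v1_capacity(needed)}, "
          f"v2 provisions {v2_capacity(needed)}")
```

Needing just over a power of two was the worst case under v1: wanting 9 units got you (and billed you for) 16.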

The absolute killer app, though, is Babelfish for Aurora PostgreSQL. If you’re considering how to move off your expensive SQL Server licenses, but don’t have the time (or expertise) to rewrite all your applications, Babelfish provides a TDS/T-SQL interface that speaks the Microsoft SQL Server protocol and language on top of a real live PostgreSQL database that you can also talk to as a Postgres database! This is going to save a lot of businesses a lot of time and money. Even better, it’s open source too.

Data Lake

AWS Glue is the service that allows you to build data catalogues from multiple sources, and the new Glue Elastic Views feature will allow you to build and maintain materialised views from constantly-changing data sources like Aurora, RDS, and DynamoDB and publish them into a range of targets including Redshift, S3, and Elasticsearch.

Machine Learning

A new addition to the SageMaker Studio suite is SageMaker Data Wrangler (which I initially and amusingly misheard as Gator Wrangler), which allows you to aggregate and prepare ML features, recommends transformations, and continuously validates data; this is yet another advance in the set of tools and options available for ML on AWS, and goes even further towards making ML available to more people without the need for serious and deep AI/ML and data engineering skills.

Along with Gator Data Wrangler comes Feature Store, which provides a way to catalogue, organise, share, and find ML features for use in your models.

Next is SageMaker Pipelines, which is kind of CI/CD for ML: you can define end-to-end workflows (which are shareable) as templates and then execute them to provision and manage models.

Last but not least are a couple of welcome additions to CodeGuru: Python support and a Security Detector. The first is self-explanatory, and the second looks for and reports on insecure coding practices. Incorporating these into build pipelines will allow for some powerful automated improvement suggestions for anyone, anywhere.

DevOps Tools

The new DevOps Guru (a name that I quite hate) isn’t actually a wise DevOps practitioner on top of a mountain, but a service that examines your AWS account and reports on things like missing monitoring and alarms, approaching resource limits, under- and over-provisioned resources, memory leaks, and the like. In spite of the name, we turned it on in one of our accounts and it immediately found a couple of tweaks we could make to improve things. The challenge for older AWS environments will be managing the flood of information and prioritising the recommendations it’ll produce.

Business Intelligence

The familiar QuickSight visualisation tool gets a new natural language interface, called simply “Q”. The Star Trek-themed name makes me wonder what would happen if you asked it for “tea, earl grey, hot”, and whether or not QuickSight is in fact powered by dilithium, but that’s just my “I haven’t slept” whimsy at work.

Call Centre Tech

This gets a category all of its own, and it’s a biggie. Amazon Connect is the automated call-centre technology (and let’s be honest, it’s pretty amazing) that gets some big boosts in this announcement: Connect Wisdom (which looks up data about products and services automatically based on the conversation, and provides that to the agent); Connect Customer Profiles (which provides agents timely data about a customer’s history and related activity during the call); Real-Time Contact Lens (which gives deep insights into sentiment analysis, suggestions for responses, etc.); Tasks (to capture and manage follow-up tasks); and the absolute “my voice is my passport, verify me” of Voice ID, which allows customers to be identified uniquely simply by speaking.

Internet of Things

The IoT space gets some love too, with the somewhat … dubiously named “MONITRON” (all bow before MONITRON!), which is a set of sensors that you can attach to real-world equipment to measure temperature and vibration. Combined with “Amazon Lookout for Equipment” (which is not, actually, a warning about something heavy falling on you) it can provide predictive maintenance recommendations. In the physical process space, this has the potential to save large amounts on both unnecessary and too-late maintenance.

Next is another physical appliance called Panorama, which can take your existing video streams and apply computer vision to them to identify abnormalities and events which may require human intervention. You can build custom models in SageMaker as well if you want, or use built-in models for manufacturing, retail, construction, and so forth.

Edge Computing

Last year’s announcement of Outposts provided you with the option of having AWS-built-and-maintained hardware in your own on-premises environment, provided you wanted a full rack; this year brings what I cheekily call “Outposts Mini”, which allows you to get Outposts in 1RU and 2RU sizes instead of having to dedicate a whole rack bay. These would be useful for remote areas, branch offices, and the like.

Local Zones are spreading too, although only in the USA — 12 more across the continental USA. I’m waiting for one here in Melbourne (AU, not Florida) but I’m not holding my breath.

That’s a Wrap!

If you’ve made it this far, congratulations. The actual three-hour event comes down to just a few pages in the end. If you’d like to read more about any of the services and features mentioned above, head over to https://aws.amazon.com/ or speak to us here at Cevo — we live and breathe this kind of thing, and would love to chat about how we can help you with any of this shiny new stuff (or the solid, business-value stuff that pays the bills too).

Stay tuned for our wrap of the next keynote announcements too: the Machine Learning Keynote, the Infrastructure Keynote, and the always-popular Werner Vogels Keynote.
