Welcome to the Midnight Madness
As in past years, AWS kicked off re:Invent week with their Midnight Madness session. #TeamCevo got talking with a number of non-US attendees who weren't aware of where the idea comes from. For others wondering, after attending in 2017 I did some hunting and found that Midnight Madness is an event celebrating the upcoming college basketball season: the tradition originated with teams holding public practices at midnight on the earliest day a practice was allowed.
The big announcement from Midnight Madness was the launch of AWS DeepComposer (https://aws.amazon.com/deepcomposer/) - a set of ML models built to extend and complement an input piece of music with additional instruments and accompaniment.
Along with the launch of the service, Dr Matt Wood announced a physical keyboard that will be available soon - putting the power of the DeepComposer ML within reach of more people.
Monday's announcements that stuck out for me include:
- EC2 Image Builder
- Amazon EventBridge - Schema Registry and Discovery
- Amazon Braket
- IAM Access Analyzer
The two interesting announcements for me here are EC2 Image Builder and Amazon Braket.
EC2 Image Builder
While lots of the talk is around Docker and serverless deployment models, there is still a huge market for EC2-based applications. Everyone who's been in the AWS ecosystem for a while has built their own EC2 image pipeline - it does seem odd that we are only getting this service now.
The EC2 Image Builder service allows you to create custom AMIs based on both Linux and Windows. It will automatically trigger new downstream builds when the base AMI changes - this saves the headache of either scheduling builds or developing job-based triggers from the AWS-supplied SNS topics.
Mutations to the AMI are managed via Build Components. A few components are available at launch, but the real power comes from being able to define your own components through a simple YAML file.
At this stage only a small set of primitives is available for Build Components.
Beyond the Build stage, it's great to see that Image Builder also supports a validation phase, so you can check that your images actually work. There are a number of pre-supplied test suites, and these can easily be extended for your specific needs through some YAML configuration.
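To give a flavour of what one of these YAML component documents might look like, here is a minimal sketch of a build-plus-validate component. The names, actions and field layout here are illustrative assumptions rather than a copy of the launch schema, so expect the real format to differ in detail:

```yaml
# Hypothetical Image Builder component: install nginx, then validate it.
# Field names and the ExecuteBash action are assumptions for illustration.
name: install-nginx
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallNginx
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum install -y nginx
  - name: validate
    steps:
      - name: CheckNginxVersion
        action: ExecuteBash
        inputs:
          commands:
            - nginx -v
```

The appeal of this shape is that the build steps and their validation live in one version-controllable file, rather than being split across a bake script and a separate test job.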
It doesn't look like there is CloudFormation support at this time, so expect some updates in the coming weeks as this starts to be rolled out.
I can see a solid fit here: CloudFormation-deployed Build and Test components, composed into a set of image builds, would give a fully version-controlled and composable approach to building images.
Amazon Braket
I must say that I have not followed the quantum computing race, but with Google's recent announcements in this space it was clear that AWS would follow suit.
The name Amazon Braket comes from the Bra-ket notation commonly used to denote quantum mechanical states (you learn something new every day).
The game changer here is access: scientists, researchers, and developers can begin experimenting with computers from multiple quantum hardware providers in a single place.
Pick of today’s Sessions
With hundreds of sessions running each day, it's really hard to know what will be great and what is just a sales pitch. I did end up in a great session today run by the team from CLP (https://www.clp.com.hk/en), a large-scale power business based out of Hong Kong. Pubudu and Di gave a great overview of the changing demands in the energy sector, driven by the introduction of new sources of energy and the changing expectations of customers.
The introduction of wind and solar, along with heavy new consumption from electric cars, has created new challenges for power operators. They can no longer rely on purely statistical models of demand; they need greater visibility of what is coming into and going out of the grid.
They walked us through one of their solutions: a real-time data logging platform that feeds information about household solar to operators and home owners. The platform is just over a year old, and is a combination of home-grown IoT, Kinesis, Lambda and API Gateway solutions.
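As a rough illustration of the kind of telemetry such a pipeline ingests - every name and field here is hypothetical, not CLP's actual schema - an edge device might package a household solar reading into an MQTT topic and JSON payload like this:

```python
import json
import time


def solar_reading_payload(site_id: str, watts_generated: float,
                          watts_exported: float) -> tuple[str, str]:
    """Build a (topic, payload) pair for one household solar reading.

    Topic and field names are illustrative assumptions only.
    """
    topic = f"solar/{site_id}/telemetry"
    payload = json.dumps({
        "site_id": site_id,
        "timestamp": int(time.time()),  # epoch seconds at the edge device
        "watts_generated": watts_generated,
        "watts_exported": watts_exported,
    })
    return topic, payload


topic, payload = solar_reading_payload("site-001", 3200.0, 1450.5)
print(topic)  # solar/site-001/telemetry
```

A small, self-describing payload like this is easy to fan out: the same message can be published over MQTT, dropped onto a Kinesis stream, and handed to Lambda consumers without per-consumer translation.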
Even with a product as new as this, they are keeping their eye on the changing AWS landscape and have started to reduce their custom footprint by leveraging new AWS services to take the load off their team.
By using AWS IoT Core, they get a base framework - including MQTT with store-and-forward of information - that greatly decreases the complexity of their edge devices.
They're also using Amazon Neptune to replace their hand-managed graph database, and AWS AppSync with a new GraphQL API to greatly reduce their egress management.
This is a great story of how staying nimble in the cloud space, being open to change, and having a culture ready to adopt it can drive massive efficiencies for businesses.
I’m looking forward to what Day 2 brings and the exciting announcements that will come with Andy’s keynote tomorrow morning.