TLDR: re:Invent 2019 was an evolution, not a revolution. ML and IoT offerings are still improving at pace. Continued move towards containers and serverless over managing servers.

At DiUS, the #cloud channel is one of the most vibrant. It’s the fifth most popular, behind the likes of #pact-sass (a product play, so obviously plenty of chat) and #afl (which seems to bring the Melbourne office to a halt every Monday during footy season).

That our Slack posts and channel choices reflect who we are as an organisation is not that surprising. What’s that they say about DevOps engineers? 98% posting memes in Slack, 2% coding? #serverless #toomuchtime #notenoughwork

So what better way to find out what our team thought of the 77 product launches, feature releases and service announcements at AWS re:Invent 2019 than to ask them? High fives to Gerd Wittchen, Ken Ong, Mick Reidy, Erik Danielson, Stephen Bartett, Duy Tin Troung, Zoran Angelovski, Thom Joy, Warner Godfrey and Shahin Namin for the Slack chat.

The continuing growth of serverless

In this day and age, we feel you should not be asking: why serverless? You should be asking: why not serverless? A fair few things to love here, all new releases that reinforce and extend the ability to run operations on large datasets in a serverless fashion:

  • Fargate and EKS integration: Kubernetes has become the default container orchestration tool, but managing a cluster is still a challenge. This is a positive move toward reducing the friction of getting started, especially in places that don’t want ECS lock-in.
  • Finally, the HTTP API release for your Lambda functions, along with Lambda Provisioned Concurrency and RDS Proxy, improved in one swoop the developer experience, cost and performance of API Gateway applications. Out of the box, you can now better manage the dreaded cold start and those pesky database connections. This really addresses some long-term gripes with this stack around bursty or unpredictable workloads and makes it much more attractive going forward.
  • IAM Access Analyzer allows organisations to inspect the access controls and security of their key resources (S3, KMS, SQS, IAM and Lambda), using automated reasoning to exhaustively test access patterns against resource policies and prove that those policies meet the organisation’s security posture with regard to external access. We’re excited by its ability to continually scan resource policies as they change over time, ensuring that sensitive data is not accidentally exposed.
  • Athena support for user-defined functions is another big step in making ETL on AWS a truly serverless option, allowing large data transformations with minimal overhead. We have started adopting Athena into our toolset, and this will just reinforce that.
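As a sketch of how little wiring the Lambda improvements now take, here is a hypothetical SAM template (resource names, handler and path are ours, not from any announcement) that fronts a function with the new HTTP API and pre-warms it with Provisioned Concurrency:

```yaml
# Hypothetical sketch: an HTTP API in front of a Lambda function,
# with Provisioned Concurrency keeping warm instances to dodge cold starts.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler              # assumed handler module
      Runtime: python3.8
      AutoPublishAlias: live            # Provisioned Concurrency requires an alias/version
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 5
      Events:
        GetItems:
          Type: HttpApi                 # the new, cheaper HTTP API (vs REST API)
          Properties:
            Path: /items
            Method: GET
```

A few lines of configuration replace what used to be a custom warming Lambda on a schedule plus a hand-rolled API Gateway setup.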
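To give a flavour of the Athena UDF announcement, a query can now hand individual column values to a Lambda function inline. The function, Lambda name and table below are made up for illustration:

```sql
-- Hypothetical example: 'mask_pii' is a UDF backed by a Lambda function
-- we've named 'athena-udf-handler'; 'customers' is an assumed table.
USING EXTERNAL FUNCTION mask_pii(value VARCHAR)
    RETURNS VARCHAR
    LAMBDA 'athena-udf-handler'
SELECT mask_pii(email) AS masked_email
FROM customers
LIMIT 10;
```

Per-row transformation logic lives in a Lambda you control, while Athena handles the scan, so there is still no cluster to run.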

Machine learning / Internet of Things goldrush

With demand spiking for our specialist services in ML/IoT, this area is always of particular interest. Our team grouped the re:Invent announcements into:

Democratising AI

The release of five AWS AI services that don’t require ML expertise is no surprise, since AWS has been very clear it wants to make the power of AI/ML available for any company and any customer to use. And we agree: at DiUS we are fans of automating ‘all the things’ so we can start delivering value as soon as possible.

Our team greeted some of these AI services with great interest, like Amazon Fraud Detector. The premise of a service underpinned by an algorithm trained on the dataset generated by Amazon’s behemoth e-commerce machine is attractive. 

And a vertical NLP offering such as Amazon Transcribe Medical, which provides real-time speech-to-text transcription, is a step in the right direction. We’ve seen a massive explosion in demand for conversational AI among our clients, and being able to fast-track development in more specialised markets, like healthcare, will increase the applicability of these offerings.

But Amazon CodeGuru? Not so much for our team. “If you need a code review tool to tell you to do AWS SDK pagination properly…” says Ken Ong, DiUS DevOps specialist.
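For the uninitiated, the gripe is about code that calls a list API once and silently drops everything past the first page. The pattern CodeGuru nudges you toward, sketched here with a stand-in client rather than the real AWS SDK, is simply to keep following the continuation token:

```python
def list_all(client, **kwargs):
    """Collect every item across pages by following the NextToken
    continuation marker, the way AWS list/describe APIs expect."""
    items, token = [], None
    while True:
        if token:
            kwargs["NextToken"] = token
        page = client.list_items(**kwargs)
        items.extend(page["Items"])
        token = page.get("NextToken")
        if not token:  # no token means this was the last page
            return items


class FakeClient:
    """Stand-in for an AWS client: serves a fixed list, 3 items per page."""
    def __init__(self, items, page_size=3):
        self._items, self._n = items, page_size

    def list_items(self, NextToken=None, **_):
        start = int(NextToken or 0)
        end = start + self._n
        page = {"Items": self._items[start:end]}
        if end < len(self._items):
            page["NextToken"] = str(end)
        return page
```

With seven items and three per page, `list_all(FakeClient(list(range(7))))` walks all three pages rather than stopping at the first.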

Make AI Great Again

On the other end of the spectrum, the swathe of SageMaker announcements signals more support for specialists using the Amazon ML suite for custom model development, helping organisations deploy to production and supporting them thereafter. Given the high rate of failure in ML projects, we think this is a smart move.

Up to now we’ve felt the premise of SageMaker as a one-stop shop for all your AI needs was a good idea, but still a bit lacking in the wild. Our ML specialists are composed of cloud engineers who have transitioned to ML, as well as PhD-holding research scientists, so the feels are strong about the SageMaker tooling. While there’s still some way to go, these announcements were a very positive step in the right direction.

  • There are many influential factors when it comes to training a machine learning model. Among them are the hyperparameters, which can have dramatic effects on the final performance of the model. SageMaker Debugger lets us see how the model evolves by saving snapshots of it along with loss values, weights and their gradients. We’re excited to see how SageMaker Debugger will help us better diagnose problems such as vanishing gradients during training. And although it’s not the focus of SageMaker Debugger, saving model snapshots also lets us build a higher-performing model by ensembling different snapshots.
  • It is very likely that data characteristics change throughout the life of a model. These gradual or sudden changes are known as “data drift” and might be the result of an update to the data pipeline or the inherent characteristics of the data and its source. SageMaker Model Monitor provides an easy way to monitor the data being passed to ML models. It lets us detect data drift by capturing the data schema and its statistical characteristics, and also permits defining dashboards and alerts that help us take proactive action.
  • We are not seeing a massive need for SageMaker Processing right now, but admit there are some niche workflows out there that it will be useful for.
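The drift idea in Model Monitor can be sketched without any AWS machinery: capture baseline statistics for each feature from training data, then flag live batches whose statistics wander too far. The helper names and the z-score threshold below are our own illustration, not the SageMaker API:

```python
import statistics

def baseline_stats(rows):
    """Record (mean, stdev) per feature column from training data."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def drifted(rows, baseline, z_threshold=3.0):
    """Return (feature_index, drifted?) pairs: a feature is flagged when
    its live mean sits more than z_threshold baseline stdevs away from
    the baseline mean."""
    cols = list(zip(*rows))
    flags = []
    for i, (col, (mu, sigma)) in enumerate(zip(cols, baseline)):
        live_mu = statistics.mean(col)
        if sigma:
            z = abs(live_mu - mu) / sigma
        else:  # constant baseline feature: any change at all counts as drift
            z = 0.0 if live_mu == mu else float("inf")
        flags.append((i, z > z_threshold))
    return flags
```

Model Monitor does considerably more (schema checks, scheduled analysis, CloudWatch alerts), but the core comparison of live statistics against a recorded baseline is this simple.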

Edge Processing for the win

AWS Wavelength has great potential for edge processing in the IoT and VR/AR applications we build, and we can’t wait to get our hands on it to road test. Designed to let developers build super-low-latency apps for 5G devices, it will enable applications that need real-time response, such as driverless cars and other automated processes, as well as make rich AR/VR customer experiences happen really quickly from anywhere. And of course, we are also excited about ML on the edge.

Some ‘other’ things that created some chatter

We’re still eagerly awaiting the release of AWS Control Tower in the Sydney region. We know it’s coming, and we are seeing huge demand for Landing Zone and Control Tower as a way to help our clients more quickly set up a new, secure, multi-account AWS environment. Compared to what we had to do two years ago, these services are a real game changer for organisations wanting to move fast.

And we do like the new Amazon Builders’ Library. It’s a really good collection of thoughts and design patterns, and a growing, living knowledge base is a welcome addition, particularly given the ever-expanding number of things to keep across. There’s more about how it’s structured in Jeff Barr’s blog post.

Building quantum computers with practical benefits is still many years away—it’s certainly on DiUS’ Horizon Three—but there’s no doubt that it’s going to change the game in how we can tackle problems of a certain size and complexity. Here at DiUS we raised an eyebrow at the announcement of Amazon Braket and then lowered it again once we read the announcements for Amazon Quantum Solutions Lab and AWS Center for Quantum Computing. Good to see AWS providing the support that engineers and organisations will need to explore the potential of quantum computing.