Writing Custom Resources in AWS with SAM

As an AWS Advanced Consulting Partner, we use AWS as a core building block for a lot of what we do today. Recently I was asked to take part in a Developer Day talk on the Serverless Application Model (SAM) and modern application development.

Given we only had a short amount of time to touch on this topic, I wanted to do a deeper dive into how you can use this tooling, specifically around building Custom Resources for CloudFormation (CFN). Later parts of this post also make heavy use of the AWS Serverless Application Repository (SAR) to host our Custom Resource as an Application.

Recently we have been using a lot of the AWS Machine Learning services, which often have limited or no CloudFormation support. Custom Resources allow for more sophisticated and consistent management of these services by letting you add custom provisioning logic to your templates, and Lambda functions are a common way to back that logic.

In this blog, I’m assuming some prior knowledge of the following:

  • AWS, in particular Lambdas, CloudFormation and Custom Resources
  • How to set up your Python environment, including a VirtualEnv and pip
  • The command line and bash
  • Preferably some exposure to SAM as well

I am going to share how I set up my development environment and deployment tooling to create more complicated custom resources using SAM.

Managing our custom resource

Often, for convenience, the Custom Resource Lambda is bundled into the consuming project and referenced locally from that CloudFormation stack. If that Custom Resource is only used in one place this is fine. But if you want to reuse this functionality across stacks, you have to decide how to distribute the code. It also makes the consuming applications more complex: you may, for example, have the Custom Resource Lambda written in a different language to the core application, which can place extra cognitive load on developers. This is especially relevant for some of the newer AWS services, particularly the Machine Learning ones such as Lex, where CloudFormation support is minimal or not in a configuration you are happy with. You may also think about some of these techniques if you're part of your company's AWS Platform team and are wondering how to manage your tooling.

The rest of this blog discusses some approaches to doing this. The first approach illustrated here is a separately deployed Lambda that the consuming app can use simply by referencing it. Later we will extend this by using the SAR service and CloudFormation's nested stacks to better distribute and package this Lambda for each application. The following diagram illustrates our first design.

So the steps will be:

  • Initialise our app and create the Lambda
  • Make it do something meaningful
  • Share it to our consuming application
  • Setup CI/CD tooling

Getting started with SAM

Open a terminal session with the SAM CLI installed and configured. Before you start, you will need an S3 bucket to publish your application to; this will need some policies set as per the SAM documentation. To create a new project, run the following from the terminal:

sam init

I provided the following options:

  • Use a Managed Application Template
  • A project name
  • python 3.7
  • pip
  • EventBridge Hello World template

Note: currently I'm not aware of a template for CloudFormation Custom Resources, but you can follow a tutorial here to create your own if you wish.

After init, you will get the generated project structure.
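As a rough sketch (the exact files depend on the template and project name you chose), the layout looks something like this:

sam-app/
  README.md
  template.yaml
  events/
    event.json
  hello_world/
    __init__.py
    app.py
    requirements.txt
  tests/
    unit/
      test_handler.py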

Deploy to AWS

We need to strip out some of the default configuration. In template.yaml, remove the Events node from the function; by default it wires the function up to an EventBridge (CloudWatch Events) rule. You are then ready to build and deploy. SAM applications have a three-step lifecycle:

  1. Build — create a bundle with all our dependencies and artifacts
  2. Package — upload the bundle to S3 and transform our SAM template into a plain CloudFormation one
  3. Deploy — deploy this to AWS

To build:

sam build
# save time on future builds with the flags '-u --skip-pull-image'

For the curious, you can see all the artifacts bundled under the .aws-sam directory.

The package and deploy normally go together in my workflow, so I wrapped it in a shell script like so:

#!/bin/sh
defaultBucket=<insert here>
STACK_NAME=$1
BUCKET_NAME=${2:-$defaultBucket}
echo "Packaging..."
sam package -t template.yaml --output-template-file packaged.yaml --s3-bucket $BUCKET_NAME > /dev/null 2>&1
echo "Deploying..."
sam deploy --template-file packaged.yaml --stack-name $STACK_NAME --capabilities CAPABILITY_NAMED_IAM


If you log into the AWS Console you should now be able to see the CloudFormation stack and the Lambda deployed.

Handling custom resource events

All resources in CloudFormation have a defined lifecycle they need to respond to. AWS will pass different event types such as create, update and delete. We need to handle all of these via our custom resource. Fortunately there are a lot of open source and AWS provided libraries to help with this. In this example I have used: 

aws-cloudformation/custom-resource-helper

Copy the sample code into our handler (removing the previous logic there) or use your own logic.

from __future__ import print_function
from crhelper import CfnResource
import logging

logger = logging.getLogger(__name__)

# Initialise the helper, all inputs are optional
helper = CfnResource(json_logging=False,
                     log_level='DEBUG',
                     boto_level='CRITICAL')

try:
    # Init code goes here
    pass
except Exception as e:
    helper.init_failure(e)


@helper.create
def create(event, context):
    logger.info("Got Create")
    # Optionally return an ID that will be used for the
    # resource PhysicalResourceId, if None is returned an ID
    # will be generated. If a poll_create function is defined
    # return value is placed into the poll event as:
    # event['CrHelperData']['PhysicalResourceId']
    #
    # To add response data update the helper.Data dict
    # If poll is enabled data is placed into poll event as
    # event['CrHelperData']
    helper.Data.update({"test": "testdata"})
    # To return an error to cloudformation you raise an exception:
    if not helper.Data.get("test"):
        raise ValueError("this error will show in the cloudformation "
                         "events log and console.")
    return "MyResourceId"


@helper.update
def update(event, context):
    logger.info("Got Update")
    # If the update resulted in a new resource being created,
    # return an id for the new resource.
    # CloudFormation will send a delete event with the old id
    # when stack update completes


@helper.delete
def delete(event, context):
    logger.info("Got Delete")
    # Delete never returns anything.
    # Should not fail if the underlying resources are already deleted.


@helper.poll_create
def poll_create(event, context):
    logger.info("Got create poll")
    # Return a resource id or True to indicate that creation is complete.
    # if True is returned an id will be generated
    return True


def lambda_handler(event, context):
    helper(event, context)


Running locally

Before we deploy, we'd like to see it working. SAM now offers some great tooling to invoke and debug Lambdas locally. The application first needs to be built with its new dependencies. Run this from the function folder (hello_world) that SAM created during initialisation:

pip install -r requirements.txt
pip install crhelper
pip freeze > requirements.txt

Custom Resources handle specific events from CloudFormation. SAM provides some easy to use generators to create fixtures for this and many other AWS Services.

sam local generate-event --help
sam local generate-event cloudformation create-request > events/create_event.json

And then invoke it with:

sam local invoke -e events/create_event.json

It has succeeded if you see a log message like the one below (if you look at the create handler code, you will see where we write this message):

[INFO] YYYY-MM-DDTHH:mm:ss.456Z 8e09d2a6-55f8-1716-05e5-29bb50e24a8d Got Create

You can easily make update or delete events by modifying the 'RequestType' in create_event.json. Additionally, you can use the '--env-vars' flag to pass environment variables to the Lambda from a JSON file, which is particularly useful locally when you need to set some debug parameters; a sketch of this follows.
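For example, you could create a hypothetical env.json keyed by the function's logical ID from template.yaml (HelloWorldFunction here); LOG_LEVEL is just an illustrative variable name:

{
  "HelloWorldFunction": {
    "LOG_LEVEL": "DEBUG"
  }
}

and then pass it in when invoking:

sam local invoke -e events/create_event.json --env-vars env.json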

Accessing our custom resource from another project

While our Lambda is currently something of a no-op, we can track its invocations via CloudWatch. Let's link it to our consuming application. We are going to do this by exporting a reference to our Lambda. The SAM template we generated already outputs the function ARN by default; we can make the following modification to allow other stacks to import it (the significant addition being the 'Export' element).

Outputs:
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
    Export:
      Name: !Sub "${AWS::StackName}-HelloWorld"

You will also probably need to add some policy to your Lambda; the crhelper library requires some extra permissions. In the SAM template you can create a role like this:

HelloWorldFunctionRole:
  Type: 'AWS::IAM::Role'
  Properties:
    RoleName: "managed-cr-role"
    Path: '/'
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - 'lambda.amazonaws.com'
    Policies:
      - PolicyName: "LogPolicy"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            Effect: Allow
            Action:
              - 'events:*'
              - 'logs:CreateLogGroup'
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource:
              - !Sub "arn:aws:events:${AWS::Region}:${AWS::AccountId}:rule/MyCustomResource*"
              - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*"
              - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*:*"
      - PolicyName: "AddPermission"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - lambda:AddPermission
              Resource: "*"


Your policy will obviously be different in your own context. And then add the following to your Lambda config in the same file:

DependsOn:
 - HelloWorldFunctionRole
Properties:
 Role: !GetAtt HelloWorldFunctionRole.Arn

Go through the build, package and deploy steps again. In the AWS console, CloudFormation should now display an export value for this stack:

CloudFormation stack exports
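If you prefer the CLI, you can also check for the export (assuming you call the custom resource stack my-custom-stack, as in the next step):

aws cloudformation list-exports --query "Exports[?Name=='my-custom-stack-HelloWorld']"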

This means in our consuming application’s template.yaml (assuming you called the original custom resource stack my-custom-stack) we can do:

Resources:
 MyCustomResource:
   Type: Custom::MyCustomResource
   Properties:
     ServiceToken: !ImportValue 'my-custom-stack-HelloWorld'

Now when you deploy your new application, it will create 'MyCustomResource' using our HelloWorld function. To watch this in action in the Console, go to the HelloWorld function in Lambda (you can navigate there from the CloudFormation stack of the SAM custom resource we deployed) and look at the CloudWatch logs under Monitoring. If everything has worked, you should see a log message similar to the one we saw when we invoked locally.
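You can also tail the function's logs from the terminal with the SAM CLI (again assuming the custom resource stack is called my-custom-stack):

sam logs -n HelloWorldFunction --stack-name my-custom-stack --tail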

Automating this process via CI/CD

Now you can deploy from your machine, but we want to manage our custom resource like any other software product, and that requires a CI/CD pipeline and automated testing.

I am using the popular CI/CD tool CircleCI (there are a number of other great options, including AWS CodePipeline), which has a concept of Jobs tied together via Workflows. I have added a sample build:

# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2.1

jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: run tests
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install tox
            tox
      - store_artifacts:
          path: htmlcov
  deploy:
    executor: aws-serverless/default
    steps:
      - checkout
      - aws-serverless/install
      - run:
          name: deploy to AWS
          command: |
            bin/deploy

orbs:
  aws-serverless: circleci/aws-serverless@1.0.1

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          context: <your aws context>
          requires:
            - build
          filters:
            branches:
              only: master


Note the use of executor. This uses a powerful CircleCI feature called orbs, which vendors can create to provide functionality and even opinionated workflows for their products. AWS has some excellent ones, and here I have used aws-serverless.
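The deploy job calls a bin/deploy script that isn't shown here. As a minimal sketch, assuming it simply wraps sam build and the earlier package/deploy script with a fixed stack name, it could look something like this:

#!/bin/sh
# bin/deploy - hypothetical wrapper; reuses the earlier sam_deploy.sh script
set -e
sam build
./sam_deploy.sh my-custom-stack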

So now we have all the tooling necessary for a process resembling a modern software release lifecycle. We can even add unit tests and other automation quite easily (SAM with Python uses pytest by default); a sketch of one follows.
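As a minimal sketch (assuming the handler lives at hello_world/app.py as generated earlier, and assuming crhelper's decorators hand back the wrapped function so it can be called directly), a unit test for the create handler might look like this:

# tests/unit/test_handler.py - a minimal sketch, not the generated test as-is
from hello_world import app


def test_create_sets_response_data_and_returns_id():
    # A stripped-down create-request event; a real one carries more fields.
    event = {
        "RequestType": "Create",
        "ResourceProperties": {}
    }

    # Call the decorated create function directly rather than going through
    # lambda_handler, so nothing is sent back to CloudFormation.
    physical_id = app.create(event, None)

    assert physical_id == "MyResourceId"
    assert app.helper.Data["test"] == "testdata"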

Better Application Lifecycle Management

Assuming you got all this to work, we are now sharing our Custom Resource across multiple projects. However, all consuming applications are now tightly bound to the lifecycle of our SAM Lambda application stack, and we also have to manage the lifecycle of that SAM application itself. What if we could offload this management and allow each application to take updates at a time of its choosing? This would help consuming applications better manage their release cycle and reduce our overhead.

Another possible problem is that our one Lambda has to share any state, say an S3 endpoint, across many consuming applications. We already see this with the CloudWatch logs being part of the SAM application's lifecycle instead of the consuming application's. One solution would be to deploy our Custom Resource Lambda multiple times, once per consumer, but that creates more management and complexity.

To address this, we will use the AWS Serverless Application Repository (SAR) in conjunction with CloudFormation's nested stacks. Nested stacks allow you to create stacks as part of other stacks, which lets you build reusable snippets describing common patterns in your infrastructure. Your platform team can create, share and update these, and client applications can take advantage of them within their own lifecycle.

SAR is a mechanism for publishing our serverless applications so they can be consumed by others, whether within your own account(s) or cross-account.

These techniques combine to allow us to instantiate our Lambda in the lifecycle of the consuming CloudFormation stack. The diagram below illustrates the design.

Nested application using AWS Serverless Application Repository

Our software lifecycle will change slightly:

  • Build our application (requires changes to our SAM template)
  • Publish instead of deploy command
  • Embed the application in our consuming application

Create a SAR application

SAR applications need to be published, and SAM provides the suitably named publish command to do this, including versioning. To see all the options, run:

sam publish --help

To make our serverless application publishable, we need to add a Metadata section to the root of template.yaml. In preparation for this, create an S3 bucket and apply a bucket policy to it, which you can do with this command:

aws s3api put-bucket-policy --bucket <YOUR BUCKET> --policy file://policy.json
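Here policy.json is the policy given in the SAM publishing documentation, which basically gives the SAR service permission to read from the bucket. As a rough sketch (check the current documentation for the exact statement, and substitute your own bucket name), it looks something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "serverlessrepo.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<YOUR BUCKET>/*"
    }
  ]
}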

We now need to change the SAM template itself. I've attached an example below, with the key bit being the Metadata entry.

AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  AWS::ServerlessRepo::Application:
    Name: my-app
    Description: hello world
    Author: Elliott Murray
Transform: AWS::Serverless-2016-10-31
Description: >
  sar_app
  Sample SAM Template for sar_app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn


Then run build and package, followed by publish (instead of deploy), and list the result:

sam package -t template.yaml --output-template-file packaged.yaml --s3-bucket $BUCKET
sam publish -t packaged.yaml --semantic-version 0.0.1
aws serverlessrepo list-applications # returns application IDs
aws serverlessrepo list-application-versions --application-id <APP_ID>

The list-applications command should return something like the following; use the ApplicationId from it in the list-application-versions command:

{
  "Applications": [{
    "ApplicationId": "arn:aws:serverlessrepo:<ARN_DETAILS>",
    "Author": "...",
    "CreationTime": "...",
    "Description": "hello world",
    "Labels": [],
    "Name": "my-app"
  }]
}

Record the application id as we will need to embed this in our application.

Note: I have provided a semantic version following SemVer, and managing it is up to your CI/CD pipeline. Build numbers can be a useful tool here, with a manual bump for major or minor releases; a rough sketch follows.
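For example, in CircleCI you could derive the patch number from the build number (CIRCLE_BUILD_NUM is an environment variable CircleCI sets for each build; the 0.1 major.minor prefix is just an illustrative choice you would bump manually, and $BUCKET_NAME is assumed to be configured in your pipeline):

# hypothetical publish step in the CI pipeline
sam package -t template.yaml --output-template-file packaged.yaml --s3-bucket $BUCKET_NAME
sam publish -t packaged.yaml --semantic-version "0.1.${CIRCLE_BUILD_NUM}"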

Nested applications

CloudFormation allows us to embed stacks. In a SAM template this is achieved with the AWS::Serverless::Application resource type. Here is an example:

Resources:
 helloworld:
   Type: AWS::Serverless::Application
   Properties:
     Location:
       ApplicationId: arn:aws:serverlessrepo:<region>:<account-number>:applications/my-app
       SemanticVersion: 1.0.4
     Parameters:
       SomeParameter: YOUR_VALUE

I've left a parameter in the AWS::Serverless::Application node as an example of how to pass values through to the nested application if needed (the published template has to declare a matching parameter; see the sketch a little further below). Then in our custom resource, in the same template, we can add:

MyCustomResource:
 Properties:
   ServiceToken:
     Fn::GetAtt:
       - helloworld
       - Outputs.HelloWorldFunction

In this case we didn't need to export anything from the SAM template; because we are nesting, the value becomes available to the parent stack via its normal outputs.
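On the parameter passing mentioned above: for SomeParameter to be accepted, the published SAR template would need to declare it, for example like this (a hypothetical sketch; surfacing it as an environment variable is just one way the Lambda could use it):

Parameters:
  SomeParameter:
    Type: String
    Default: some-default

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Environment:
        Variables:
          SOME_PARAMETER: !Ref SomeParameter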

Run the following command:

aws cloudformation deploy --template-file template.yaml --stack-name <stack_name> --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND

Extra capabilities are needed to expand nested stacks, hence CAPABILITY_AUTO_EXPAND. When this succeeds you should see something like this in the console (note the Nested tag):

CloudFormation Console

To verify that it actually worked, click on the nested stack and click through its resources to the deployed Lambda. Under Monitoring, you should be able to see that it has been invoked and that there is a corresponding log message.

Limitations with this approach

There are some limitations to this approach. SAR does not actually enforce SemVer in the SemanticVersion field; it is just a label, and be warned that you can overwrite it, so a good CI/CD release process is recommended. I would suggest attaching something like a prod or latest label to the latest build, as well as incrementing version numbers following SemVer rules. This way a consumer can simply take the latest version, or pin to a specific one if there are breaking changes. Also, some of the deployment and management tooling around SAR is still a little immature.

Conclusion

I have found, in creating workflows for my own needs, that the combination of Nested Stacks, Custom Resources and SAR can be quite powerful. I have been using it to build more elegant workflows for managing some of the less well supported AWS services. These infrastructure tools often do not get the same engineering discipline as our customer-facing applications, but as illustrated, there is nothing to prevent you from applying the same techniques.

On a final note, I would like to see AWS invest more in the ideas behind the AWS Serverless Application Repository. We have seen a boom in serverless applications in recent years, but their distribution is still somewhat immature compared to, say, Docker. This investment would help mitigate some of that immaturity and support a community of content providers sharing value-add products such as Custom Resources.
