Run AWS Lambda using a custom Docker container

I wrote about building and deploying an AWS Lambda function using the SAM CLI previously.

In this guide, we will run a Lambda function inside a container.

Why run Lambda in Docker?

For a very simple reason: Lambda runtimes are standardised environments where you can only use what they provide, and they do not provide a lot. For example, if your application required a custom binary to be installed, you couldn't do that on Lambda.

But at re:Invent 2020, AWS launched Container Image Support for Lambda, allowing container images up to 10 GB in size. This may not matter for one-off functions, but for many use cases, such as machine learning models, the development workflow typically revolves around Docker, and that is exactly where deploying to AWS Lambda used to get tricky.

Setting up the development environment

You need Docker and VS Code installed on your system for this guide. Download them from the provided links and install.

Then follow these steps.

Step 1: Install Python using these instructions.

Step 2: Install AWS CLI

Step 3: Install SAM CLI

Step 4: Configure AWS & AWS CLI
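Once everything is installed, a quick sanity check saves debugging later. This is a minimal sketch: the loop simply looks for each CLI on your PATH and reports anything missing.

```shell
# Quick sanity check that the required tools are on PATH.
# docker, aws, sam, and python3 are the CLIs installed in the steps above.
missing=""
for tool in docker aws sam python3; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "All tools found"
else
  echo "Still missing:$missing"
fi
```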

Create a new app

Run the following in your terminal to create a new SAM application:

sam init

This will start an interactive session to create your app. Choose the options as shown below:

Which template source would you like to use?
        1 - AWS Quick Start Templates
        2 - Custom Template Location
Choice: 1
What package type would you like to use?
        1 - Zip (artifact is a zip uploaded to S3)
        2 - Image (artifact is an image uploaded to an ECR image repository)
Package type: 2

Which base image would you like to use?
        1 - amazon/nodejs14.x-base
        2 - amazon/nodejs12.x-base
        3 - amazon/nodejs10.x-base
        4 - amazon/python3.9-base
        5 - amazon/python3.8-base
        6 - amazon/python3.7-base
        7 - amazon/python3.6-base
        8 - amazon/python2.7-base
        9 - amazon/ruby2.7-base
        10 - amazon/ruby2.5-base
        11 - amazon/go1.x-base
        12 - amazon/java11-base
        13 - amazon/java8.al2-base
        14 - amazon/java8-base
        15 - amazon/dotnet5.0-base
        16 - amazon/dotnetcore3.1-base
        17 - amazon/dotnetcore2.1-base
Base image: 4

Project name [sam-app]:

Cloning from

After that, you will be prompted to choose an application template; choose 1 - Hello World Lambda Image Example.

AWS quick start application templates:
        1 - Hello World Lambda Image Example
        2 - PyTorch Machine Learning Inference API
        3 - Scikit-learn Machine Learning Inference API
        4 - Tensorflow Machine Learning Inference API
        5 - XGBoost Machine Learning Inference API
Template selection: 1

    Generating application:
    Name: sam-app
    Base Image: amazon/python3.9-base
    Architectures: x86_64
    Dependency Manager: pip
    Output Directory: .

    Next steps can be found in the README file at ./sam-app/
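If you'd rather skip the prompts, the same app can be generated non-interactively. This is a sketch based on the SAM CLI's `sam init` flags; the `--app-template` identifier is my assumption for the Hello World Lambda Image Example, so verify it against `sam init --help` for your installed version.

```shell
sam init --name sam-app \
    --package-type Image \
    --base-image amazon/python3.9-base \
    --app-template hello-world-lambda-image \
    --no-interactive
```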

Understanding the SAM generated application template

First, go to the sam-app directory.

cd sam-app

You should see the following files

├── events
│   └── event.json
├── hello_world
│   ├──
│   ├──
│   ├── Dockerfile
│   └── requirements.txt
├── template.yaml
└── tests

Compared to the standard Lambda example, this has one additional file: the Dockerfile, which contains the instructions to build the container in which the Lambda function will execute.


FROM public.ecr.aws/lambda/python:3.9

COPY app.py requirements.txt ./

RUN python3.9 -m pip install -r requirements.txt -t .

# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]

The first thing you'll notice is that this image builds on top of a base image from AWS's public container registry.

NOTE: You can also use non-AWS base images, such as those based on Alpine or Debian; however, the container image must implement the Lambda Runtime API. So if you use a non-AWS image, you will need to add a Runtime Interface Client manually, otherwise your app will not work.
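As a rough sketch of what that looks like, here is a hypothetical Debian-based Dockerfile that adds AWS's open-source Runtime Interface Client for Python (the `awslambdaric` package) by hand; the file names mirror the generated app:

```dockerfile
# Hypothetical example: a non-AWS base image made Lambda-compatible.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt ./
# awslambdaric is AWS's Runtime Interface Client for Python
RUN pip install --no-cache-dir awslambdaric -r requirements.txt
COPY app.py ./

# The Runtime Interface Client talks to the Lambda Runtime API
# and invokes our handler on each event.
ENTRYPOINT ["python", "-m", "awslambdaric"]
CMD ["app.lambda_handler"]
```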

And the final line is responsible for running the lambda_handler() function that is defined in hello_world/app.py.
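The handler itself is minimal; a sketch matching the shape of the Hello World template looks like this (the response format is what API Gateway's proxy integration expects):

```python
import json


def lambda_handler(event, context):
    """Minimal Lambda handler in the shape of the Hello World template.

    API Gateway expects a dict with a statusCode and a JSON string body.
    """
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }
```

The `CMD ["app.lambda_handler"]` line in the Dockerfile means: in module `app`, call the function `lambda_handler`.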

Build the project

To build the app, run the following

sam build

You need Docker and Python 3.9 installed for this to work.

If everything worked, you will see a "Build Succeeded" message.

Test the build

To test if your application is working correctly, run

sam local invoke

Again, you need Docker and Python 3.9 installed for this to work.
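By default this invokes the function with an empty event. You can also pass the generated sample event, or emulate API Gateway locally; the function name and `/hello` path below are the ones the Hello World template generates, so adjust them if yours differ:

```shell
# Invoke with the generated sample API Gateway event
sam local invoke HelloWorldFunction --event events/event.json

# Or start a local API Gateway emulator on port 3000
sam local start-api
curl http://127.0.0.1:3000/hello
```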

Deploy the project

Now there are three more steps that need to be performed, but in our case, the SAM CLI will do them all in one go.

These steps are (again, we don't need to do them manually when using the SAM CLI):

1. Re-tag our Docker image so it can be pushed to the repository.
2. Log in to the ECR repository from the Docker CLI.
3. Push the image to the ECR repository.
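For reference, those three manual steps would look roughly like this; the account ID, region, and repository name are placeholders you would substitute with your own values:

```shell
# 1. Re-tag the local image for the ECR repository
docker tag helloworldfunction:latest \
    <account-id>.dkr.ecr.<region>.amazonaws.com/sam-app:latest

# 2. Authenticate the Docker CLI against ECR
aws ecr get-login-password --region <region> | \
    docker login --username AWS --password-stdin \
    <account-id>.dkr.ecr.<region>.amazonaws.com

# 3. Push the image
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/sam-app:latest
```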

All we need to do now is

sam deploy --guided

This will start an interactive deployment session; choose options as below (blank means keeping the defaults):

Configuring SAM deploy

        Looking for config file [samconfig.toml] :  Not found

        Setting default arguments for 'sam deploy'
        Stack Name [sam-app]: hello-world
        AWS Region [us-east-1]: 
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [y/N]: 
        #SAM needs permission to be able to create roles to connect to the resources in your template
        Allow SAM CLI IAM role creation [Y/n]: y
        #Preserves the state of previously provisioned resources when an operation fails
        Disable rollback [y/N]: 
        HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: y
        Save arguments to configuration file [Y/n]: 
        SAM configuration file [samconfig.toml]: 
        SAM configuration environment [default]: 

This will deploy your app to AWS.
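Once the deploy finishes, SAM prints the stack outputs, including the API endpoint. You can also look it up later with the AWS CLI; the `HelloWorldApi` output key is my assumption based on the Hello World template, so check your template.yaml if the query returns nothing:

```shell
# Look up the API endpoint from the CloudFormation stack outputs
aws cloudformation describe-stacks --stack-name hello-world \
    --query "Stacks[0].Outputs[?OutputKey=='HelloWorldApi'].OutputValue" \
    --output text

# Then call the /hello path defined in template.yaml
curl https://<api-id>.execute-api.<region>.amazonaws.com/Prod/hello/
```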

Need Help? Open a discussion thread on GitHub.