In our series about building AWS APIs, we've covered a lot of ground around learning the AWS ecosystem. Now that we're all comfortable, it may be time to let everybody in on the world's worst-kept secret: almost nobody builds architecture by interacting with the AWS UI directly. There's no shortage of proof, with the prime example being HashiCorp: an entire business built around the premise that AWS has a shitty UI, to the point where it's easier to write code to make the things that will host your code. What a world.

In the case of creating Python Lambda functions, the "official" (AKA: manual) workflow of deploying your code to AWS is something horrible like this:

  • You start a project locally and begin development.
  • You opt to use virtualenv, because you're well aware that you'll need the source for any packages you use to be available locally.
  • When you're ready to 'deploy' to AWS, you copy all your dependencies from /site-packages and move them into your root directory, temporarily creating an abomination of a project structure.
  • With your project fully bloated and confused, you cherry-pick the files to zip into an archive.
  • Finally, you upload the zip either to Lambda directly or to S3, only to run your code, realize it's broken, and start all over (the whole dance is sketched below).
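
For the morbidly curious, the packaging half of that dance looks roughly like this (paths are illustrative and assume a virtualenv named venv; aws lambda update-function-code is the CLI equivalent of the console upload):

$ cp -r venv/lib/python3.6/site-packages/* .
$ zip -r function.zip . -x "venv/*"
$ aws lambda update-function-code \
    --function-name my_lambda_function \
    --zip-file fileb://function.zip

A rough sketch of the manual packaging workflow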

There Must be a Better Way

Indeed, there is, and surprisingly enough, the solution is 100% Python (sorry, HashiCorp, we'll talk another time). This "better way" is my method of leveraging three things: the AWS CLI, Pipenv for environment management, and the python-lambda library.

Obligatory "Installing the CLI" Recap

Make sure you're using a compatible version of Python on your system. AWS Lambda is still stuck on Python 3.6 at the time of writing (which has likely changed by the time you're reading this).

$ pip3 install awscli --upgrade --user

Install the AWS CLI via pip

💡
If you're working off an EC2 instance, it has come to my attention pip3 does not come preinstalled. Remember to run apt update && apt upgrade -y, followed by apt install python3-pip. You may be prompted to run apt install awscli as well.

Awesome, now that we have the CLI installed against the right version of Python, we need to store our credentials. Your Access Key ID and Secret Access Key can be found in the IAM console, under your user's security credentials.

$ aws configure
AWS Access Key ID [None]: YOURKEY76458454535
AWS Secret Access Key [None]: SECRETKEY*^R(*$76458397045609365493
Default region name [None]:
Default output format [None]:

On both Linux and OSX, this generates a pair of files under ~/.aws which will be referenced by default whenever you use an AWS service moving forward.
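
In case you're curious what actually got written, ~/.aws/credentials looks more or less like this, with ~/.aws/config holding the default region and output format if you set them:

[default]
aws_access_key_id = YOURKEY76458454535
aws_secret_access_key = SECRETKEY*^R(*$76458397045609365493

The [default] profile written to ~/.aws/credentials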

Set Up Your Environment

As mentioned, we'll use pipenv for easy environment management. We'll create an environment using Lambda's preferred Python version:

$ pip3 install pipenv
$ pipenv shell --python 3.6

Creating a virtualenv for this project…
Pipfile: /home/example/Pipfile
Using /usr/bin/python3 (3.6.6) to create virtualenv…
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /root/.local/share/virtualenvs/example-RnlD17kd/bin/python3
Also creating executable in /root/.local/share/virtualenvs/example-RnlD17kd/bin/python
Installing setuptools, pip, wheel...done.

Something to be aware of at the time of writing: pip's latest version, 18.1, is a breaking change for Pipenv (is there even a fix for this yet?). Thus, the first thing we should do is force usage of pip 18.0 by running pip3 install pip==18.0 with the Pipenv shell activated. Now, let's get to the easy part.

python-lambda: The Savior of AWS

So far, we've made our lives easier in two ways: we're keeping our AWS credentials safe and far away from our source code, and we have what is by far the superior Python package management solution. But this is all foreplay leading up to python-lambda:

$ pip3 install python-lambda

Installing python-lambda to your local project

This library alone is about to do you the following favors:

  • Initialize your Lambda project structure for you.
  • Isolate Lambda configuration in a config.yaml file, covering everything from the name of your entry point and handler function to program-specific variables.
  • Allow you to run tests locally, where an event.json file simulates a request being made to your function.
  • Build a production-ready zip file with all dependencies completely separated from your beautiful file structure.
  • Deploy directly to S3 or Lambda with said zip file from the command line.

Check out the commands for yourself:

Commands:
  build      Bundles package for deployment.
  cleanup    Delete old versions of your functions
  deploy     Register and deploy your code to lambda.
  deploy-s3  Deploy your lambda via S3.
  init       Create a new function for Lambda.
  invoke     Run a local test of your function.
  upload     Upload your lambda to S3.

The --help menu of python-lambda

Initialize your project

Running lambda init will generate the following file structure:

.
├── Pipfile
├── config.yaml
├── event.json
└── service.py

Files generated by the actions taken thus far

Checking out the entry point: service.py

python-lambda starts you off with a basic handler as an example of a working project. Feel free to rename service.py and its handler function to whatever you please, as we can configure that in a bit.

def handler(event, context):
    """Entry Lambda Function."""
    # Your code goes here!
    e = event.get('e')
    pi = event.get('pi')
    return e + pi

Defining a Lambda handler() function
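
Since the handler is just a plain Python function, you can sanity-check it straight from a Python shell before ever touching the lambda CLI:

# Quick local sanity check: no AWS involved, just a function call.
from service import handler

print(handler({"pi": 3.14, "e": 2.718}, None))  # -> roughly 5.858

Calling the handler directly, no AWS required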

Easy configuration via config.yaml

The base config generated by lambda init looks like this:

region: us-east-1

function_name: my_lambda_function
handler: service.handler
description: My first lambda function
runtime: python3.6
# role: lambda_basic_execution

# S3 upload requires appropriate role with s3:PutObject permission
# (ex. basic_s3_upload), a destination bucket, and the key prefix
# bucket_name: 'example-bucket'
# s3_key_prefix: 'path/to/file/'

# if access key and secret are left blank, boto will use the credentials
# defined in the [default] section of ~/.aws/credentials.
aws_access_key_id:
aws_secret_access_key:

# dist_directory: dist
# timeout: 15
# memory_size: 512
# concurrency: 500
#

# Experimental Environment variables
environment_variables:
    env_1: foo
    env_2: baz

# If `tags` is uncommented then tags will be set at creation or update
# time.  During an update all other tags will be removed except the tags
# listed here.
#tags:
#    tag_1: foo
#    tag_2: bar

Look familiar? These are all the properties you would normally have to set up via the UI. As a bonus, you can also store values (such as S3 bucket names for boto3) in this file. That's dope.
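
For instance, assuming the environment_variables block really does get injected as plain old Lambda environment variables, and assuming a hypothetical results_bucket entry you'd add to it yourself, a handler could pick up an S3 bucket name like so:

import os

import boto3  # bundled with the Lambda Python runtime


def handler(event, context):
    """Drop the computed result into a bucket named in config.yaml."""
    # 'results_bucket' is a hypothetical key added under environment_variables;
    # 'example-bucket' is just a fallback for illustration.
    bucket = os.environ.get('results_bucket', 'example-bucket')
    result = event.get('e', 0) + event.get('pi', 0)
    boto3.client('s3').put_object(
        Bucket=bucket,
        Key='lambda-results/latest.txt',
        Body=str(result).encode('utf-8'),
    )
    return result

A sketch of pulling config-driven values into a handler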

Setting up event.json

The default event.json is as simplistic as you can get. Naturally, this is not very helpful initially (it isn't meant to be). These are the contents of our example:

{
  "pi": 3.14,
  "e": 2.718
}

Setting values in event.json to be passed to the Lambda handler as Python variables

We can replace this with a real test JSON, which we can grab from Lambda itself. Here's an example of a CloudWatch event we can use instead:

{
  "id": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
  "detail-type": "Scheduled Event",
  "source": "aws.events",
  "account": "{{account-id}}",
  "time": "1970-01-01T00:00:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:events:us-east-1:123456789012:rule/ExampleRule"
  ],
  "pi": 3.14,
  "e": 2.718
  "detail": {}
}
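
If you go this route, the handler can read the CloudWatch fields the same way it reads pi and e. A minimal sketch, using the field names shown above:

def handler(event, context):
    """Log where the event came from before doing the actual math."""
    # These keys come straight from the CloudWatch test event above.
    print(f"Triggered by {event.get('source')} ({event.get('detail-type')})")
    return event.get('e', 0) + event.get('pi', 0)

Reading CloudWatch fields out of the event parameter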

Remember that event.json is what is being passed to our handler as the event parameter. Thus, now we can run our Lambda function locally to see if it works:

$ lambda invoke
5.8580000000000005

Run the Lambda with new values in event.json

Pretty cool if you ask me.

Deploy it, Ship it, Roll Credits

After you express your coding genius, remember to run pip freeze > requirements.txt. python-lambda will use this as a reference for which packages need to be bundled. This is neat because we get all of Pipenv's workflow benefits while still easily outputting exactly what we need to deploy.

Since we already specified which Lambda we're deploying to in config.yaml, we can deploy immediately: lambda deploy will use the direct zip upload method, whereas lambda deploy-s3 will deploy your code via S3.

If you'd like to inspect the package before shipping it, run lambda build, which will zip your source code plus its dependencies neatly into a /dist directory. Suddenly we never have to compromise our project structure, and we can easily source control our Lambdas by adding the build folder to .gitignore while hanging on to our Pipfiles.
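
Put together, the whole loop from working code to live Lambda looks something like this, with build being the optional "let me peek at the zip first" step:

$ pip freeze > requirements.txt
$ lambda build
$ lambda deploy

Freeze dependencies, optionally build, then deploy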

Here's to hoping you never need to repeatedly deploy Lambdas using any other method. Cheers.