If you are like me, you were starting to get comfortable with the idea of deploying your applications to cloud instances such as EC2s on AWS or droplets on DigitalOcean, when people started to shift away from cloud instances and embrace containers. Maybe now you are getting into containers and Docker, and as is to be expected, the tech world is making a move once again, this time to serverless computing. Impossible to ever catch up, right?
In this article I'm going to tell you what a serverless architecture can offer you that the more traditional approaches cannot (and more specifically how it is possible to run your Python web applications without a server!). At the time I'm writing this, AWS has by far the most mature serverless platform, with the Lambda, API Gateway and DynamoDB triad of services at the forefront, so this is the platform I'm going to concentrate on.
Introduction to Amazon Web Services
I'm going to guess that you already know what AWS is, but just in case, AWS is a collection of services hosted by Amazon as a public cloud. A cloud is a type of service in which several applications share access to compute, storage and other resources. The AWS cloud is public because it runs on Amazon's own servers, which means that your applications share access to cloud resources with other AWS customers, obviously with fairly sophisticated access controls to prevent other customers' applications from interfering with your own. If that makes you nervous, then maybe AWS is not for you, and if you still want to get into cloud computing, you should look into private cloud options instead.
Modern clouds such as AWS offer different ways to host applications. At the least involved level, you can create server instances, which are full-featured virtual machines that run the operating system of your choice and are connected to the Internet. Once you have an instance up, you can log in to it and install your software, exactly like you would on a local server. For AWS, this is the Elastic Compute Cloud (EC2) service.
The tendency, however, is to move towards a model in which developers only need to concentrate on writing their applications, leaving most or all of the administration tasks to the cloud operator. The use of containers was a step in that direction, by allowing you to create an image of your deployed application that you can then run anywhere. On AWS, the ECS service supports deployment using containers.
Your application will probably also need a database, static file storage, message queues, logging and more. AWS and most other cloud operators have you covered on that respect, as they also offer services in these and many other areas for a fully integrated experience.
Introduction to Serverless and AWS Lambda
If you follow my tutorials, you have heard me talk about Heroku a few times. This is a so called platform as a service offering, or PaaS. With this service, you can upload your HTTP server application to Heroku using `git push`. The instructions on how to start the server are given in a `Procfile`. Heroku then runs your server and forwards HTTP requests to it.
I'm sure you realize that serverless computing is not about magically deploying applications without servers. The goal is to take the PaaS idea even further, by allowing you to just upload the application code to the service provider, without you having to run a server or listen to requests. When your code needs to run, the cloud provider simply calls the entry point function of your application. This type of service has been dubbed function as a service, or FaaS, because it even offloads the management of the server to the cloud provider, leaving you just with the task of writing the logic of your application.
When writing HTTP based APIs, however, this introduces a new paradigm, because we are used to launching a server that listens to client requests and dispatches them to the right place, and all of that is now gone. Instead, we have to write the application in a way that the cloud provider can invoke it as a function. I will discuss what this means in the context of Python WSGI applications later in the article, but for now, just know that this isn't the big problem it appears to be.
In AWS, Lambda is the function as a service offering. With this service, you can upload your Python, Node.js, Java or C# code and Lambda will store it and run it for you. To work with the Lambda service you upload your project packaged as a zip file, containing your own code plus any dependencies that are needed. You have to designate a function in your code as the entry point, and this function will be called by AWS when a client "invokes" your Lambda function.
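To make this concrete, here is a minimal sketch of what such an entry point looks like in Python. The file and function names are up to you (you tell Lambda which one to call when you configure the function); the names used here are just conventions.

```python
# handler.py -- a minimal AWS Lambda entry point in Python.
# AWS calls this function with two arguments: the "event" dictionary,
# which carries the data from whatever triggered the invocation, and a
# "context" object with runtime metadata such as the request id and the
# remaining execution time.

def lambda_handler(event, context):
    # pull a value out of the event, falling back to a default
    name = event.get('name', 'world')
    return {'message': f'Hello, {name}!'}
```

If this code were uploaded in a file named handler.py, you would designate `handler.lambda_handler` as the function's entry point in the Lambda configuration.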
There are many types of events that can trigger a Lambda function to execute, and this is one of the nicer aspects of this service. For example, you can configure your lambda to run on a schedule (cloud based cron jobs!), or when someone drops a new file on S3, which is the file storage service in AWS. Of course you can also trigger the function explicitly by using the Lambda service APIs, which makes it comparable to Celery in the sense that you can start an asynchronous task.
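As an example of a triggered function, here is a sketch of a handler that reacts to S3 "object created" notifications. The event structure follows the format AWS uses for S3 notifications: a list of records, each carrying the bucket name and the key of the new object. The processing itself is left as a placeholder.

```python
# A sketch of a Lambda handler triggered by S3 "object created" events.
# S3 notification events contain a list of records, each with the name
# of the bucket and the key of the object that triggered the event.

def s3_handler(event, context):
    processed = []
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # here you would download and process the file, e.g. with boto3
        processed.append((bucket, key))
    return processed
```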
Unlike instances and containers, which require infrastructure to be up all the time and ready to accept requests, with Lambda you only pay for each function invocation, so when your function isn't running you pay nothing. For low to medium traffic you will likely end up paying very little.
Introduction to API Gateway
The API Gateway service allows you to construct API endpoints, and configure what actions these endpoints trigger when a client sends requests to them. The service takes care of scaling, rate limiting, and even authenticating your clients if you wish to go that far. An API Gateway endpoint can be configured to invoke a Lambda function, so you can create endpoints in API Gateway that match the endpoints in your API, and connect each of them to a Lambda function. With this combination of API Gateway and Lambda, your clients can work with your API in the normal way, by sending HTTP requests.
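Here is a sketch of what a Lambda function sitting behind an API Gateway endpoint looks like, using the "proxy integration" style. In this mode, API Gateway passes the request details in the event dictionary under keys such as `httpMethod`, `path`, `headers` and `body`, and expects a dictionary with `statusCode`, `headers` and `body` in return, which it converts back into an HTTP response for the client.

```python
import json

# A sketch of a Lambda handler behind an API Gateway proxy integration.
# API Gateway describes the HTTP request in the event dictionary and
# expects a dictionary describing the HTTP response in return.

def api_handler(event, context):
    if event.get('httpMethod') != 'GET':
        return {'statusCode': 405, 'headers': {}, 'body': 'Method Not Allowed'}
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'path': event.get('path', '/')}),
    }
```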
Introduction to DynamoDB
The serverless platform for AWS is often described as having three main services. I've discussed Lambda and API Gateway. The third one is DynamoDB, a simple NoSQL database service. Why is a NoSQL database so popular in the serverless world? Well, serverless applications tend to be built as a collection of small and independent pieces (the so called microservices), so while you can use a relational database, that is sometimes overkill. DynamoDB tables do not need a schema, are very easy to set up and use, and follow the "only pay for what you use" model present in many AWS services.
Going Serverless with your Python WSGI Applications
Your head is probably spinning by now, trying to figure out how Lambda and API Gateway can host a regular API without a lot of work. We had to drop the idea of running an HTTP server, so along the way we also lost requests, responses, URLs, headers, cookies and a lot of other things that HTTP based APIs know and love, and instead now we have a Lambda function that we can invoke from one or more API endpoints hosted on the API Gateway service. How can Flask or other Python frameworks work in this environment?
I'm sure you know that in spite of being old, outdated and difficult to work with, the WSGI protocol is supported by most Python web frameworks and web servers. This standard allows servers such as gunicorn or uWSGI to serve Python applications written in frameworks as diverse as Flask, Django, Bottle and Pyramid. The web server does not need to know which framework an application was written in, because all these frameworks expose a WSGI compliant entry point that the web servers can invoke each time they receive a request.
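To see what that entry point looks like without any framework in the way, here is a minimal WSGI application. A server calls this function once per request, passing a dictionary that describes the request and a callable used to begin the response.

```python
# A minimal WSGI application, with no framework involved. Servers such
# as gunicorn or uWSGI call this function once per request.

def simple_app(environ, start_response):
    # environ is a dict describing the request; start_response is a
    # callable used to begin the response with a status and headers
    body = f"Hello from {environ.get('PATH_INFO', '/')}".encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Frameworks like Flask and Django ultimately expose a callable with this exact signature, which is what makes them interchangeable from the server's point of view.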
The interesting thing is that API Gateway is, in many ways, doing the same thing web servers such as gunicorn and uWSGI do. When these servers receive a request from a client, they put all the data from the request in a Python dictionary and then pass it to the application's WSGI entry point. Similarly, when a client sends a request to an endpoint controlled by API Gateway, the service collects all the information about the request, and invokes our Lambda function with that. So what if the first thing our Lambda function does is take all that request information from API Gateway, transform it to a Python dictionary just like WSGI applications expect it, and then pass it on to our application, in the same way the web server would?
If you need a little help picturing how this would work, here is some pseudo-code of a Lambda function:
```python
from wsgi import app  # this is the WSGI application instance

# This is the lambda function entry point.
# This function is invoked by API Gateway when a request is received
def lambda_handler(event, context):
    # generate a WSGI request environment from the data sent by API Gateway
    environ, start_response = make_wsgi_request(event, context)

    # invoke the WSGI application
    response = app(environ, start_response)

    # convert the WSGI response to the API Gateway response
    return make_api_gateway_response(response)
```
In case you are not too familiar with the WSGI protocol, the `environ` and `start_response` arguments are part of the standard. All WSGI applications, no matter the framework, are invoked with these two arguments. So the above idea could easily make web applications written for Django, Flask, etc. deployable as serverless functions to AWS!
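To give you an idea of what the translation layer involves, here is a simplified sketch of an adapter that wraps a WSGI application so API Gateway can invoke it. This is not how any particular tool implements it; real adapters handle many more environ keys, header encodings and binary response bodies.

```python
import io
import sys

def lambda_handler_for(app):
    """Wrap a WSGI application so API Gateway can invoke it.

    A simplified sketch only: production adapters cover many more
    environ keys, encodings and binary payloads.
    """
    def handler(event, context):
        body = (event.get('body') or '').encode('utf-8')
        # build the WSGI environ dictionary from the API Gateway event
        environ = {
            'REQUEST_METHOD': event.get('httpMethod', 'GET'),
            'PATH_INFO': event.get('path', '/'),
            'QUERY_STRING': '&'.join(
                f'{k}={v}' for k, v in
                (event.get('queryStringParameters') or {}).items()),
            'CONTENT_LENGTH': str(len(body)),
            'SERVER_NAME': 'lambda', 'SERVER_PORT': '80',
            'SERVER_PROTOCOL': 'HTTP/1.1',
            'wsgi.version': (1, 0), 'wsgi.url_scheme': 'https',
            'wsgi.input': io.BytesIO(body), 'wsgi.errors': sys.stderr,
            'wsgi.multithread': False, 'wsgi.multiprocess': False,
            'wsgi.run_once': False,
        }
        response = {}
        def start_response(status, headers, exc_info=None):
            # record the status and headers the application gives us
            response['statusCode'] = int(status.split()[0])
            response['headers'] = dict(headers)
        chunks = app(environ, start_response)
        # join the response body and package it for API Gateway
        response['body'] = b''.join(chunks).decode('utf-8')
        return response
    return handler
```

The returned handler has the `(event, context)` signature Lambda expects, while the wrapped application still sees a perfectly normal WSGI request.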
As it turns out, there is already tooling to automatically deploy WSGI applications to AWS Lambda and API Gateway using the method described in the previous section. The most popular project that does this for Python applications is called Zappa. With Zappa, you just need to write a configuration file that describes your project, and then a `zappa deploy <configuration>` command is all it takes to deploy the project.
Zappa has one thing I personally don't like. I want my cloud deployments to be managed through an orchestration service, as this gives you a single place from where all the resources that belong to a deployment originate. For AWS, you can use the native Cloudformation service for this. Unfortunately Zappa uses Cloudformation only for a portion of its deployment tasks, and for the rest, it invokes AWS APIs directly. This makes keeping track of resources associated with a Zappa deployment harder.
A while ago I decided to try to build a tool similar to Zappa, but with the constraint that its output should be a Cloudformation template that represents the entire deployment. This template is then sent to the Cloudformation service and that is where the deployment takes place. If you were paranoid about giving a third party tool access to your AWS account, you could just generate the template offline, and then run it yourself by hand. That is the main idea behind Slam. In all honesty, I have not been able to generate a 100% Cloudformation based deployment yet, but I got pretty close. There are only a couple of minor tasks that need to be done outside of the Cloudformation template, which I hope to eliminate one day as AWS improves the features available in Cloudformation templates.
Introduction to Slam
Slam is a serverless deployment tool that allows you to deploy your Python functions or web applications to AWS Lambda, API Gateway and DynamoDB. As explained in the previous section, it does so by generating a Cloudformation template that describes the deployment in its entirety. If you are deploying a web application, the Lambda function is deployed using the techniques described above to adapt API Gateway invocations to the WSGI format.
The package is called Slam, and can be installed as usual with pip:
$ pip install slam
Instead of writing an instruction manual here on how to deploy a project with this tool, I recorded a short video a while ago demonstrating how it works:
There is also a detailed tutorial in the documentation in case you want to try this yourself. I would love it if you give Slam a try, and then let me know what you think of it.
Whether you use Slam or the more popular and fully featured Zappa, the fact is that deploying a Python project to serverless AWS couldn't be easier, and given how little these services cost for low or medium traffic needs, I expect adoption will continue to grow, at least until the industry discovers the next big thing in the world of cloud deployments, whatever that may be.