
Don't use Gunicorn to host your Django sites on Heroku

Gunicorn is a pure-Python HTTP server that’s widely used for deploying Django (and other Python) sites in production. Heroku is an excellent Platform as a Service (PaaS) provider that will host any Python HTTP application, and recommends using Gunicorn to power your apps.

Unfortunately, the process model of Gunicorn makes it unsuitable for running production Python sites on Heroku.

Gunicorn is designed to be used behind a buffering reverse proxy

Gunicorn uses a pre-forking process model by default. This means that network requests are handed off to a pool of worker processes, and that each worker process is responsible for reading the entire HTTP request from the client and writing the entire response back. If the client has a fast network connection, the whole request/response cycle takes a fraction of a second. However, if the client is slow (or deliberately misbehaving), the request can tie up that worker for much longer.

Because Gunicorn has a relatively small pool of workers (around twice the number of CPU cores), it can only handle a small number of concurrent requests. If all the worker processes become tied up waiting for network traffic, the entire server becomes unresponsive. To the outside world, your web application will cease to exist.
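
For reference, the sync worker pool is usually sized along these lines in a gunicorn.conf.py file (the numbers here are illustrative, not a recommendation from this article):

# gunicorn.conf.py -- illustrative sketch of the default pre-forking setup.
# The worker count follows the (2 x cores) + 1 rule of thumb from the
# Gunicorn docs; each sync worker handles exactly one request at a time.
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1  # e.g. 9 workers on a 4-core box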

For this reason, the Gunicorn documentation strongly recommends running it behind a buffering reverse proxy, such as Nginx. The proxy buffers the entire request and response, protecting Gunicorn from delays caused by slow network clients.

However, while Heroku does provide limited request/response buffering, large file uploads and downloads can still bypass the buffer, leaving your site trivially vulnerable to accidental (or deliberate) Denial of Service (DoS) attacks.

The Waitress HTTP server protects you from slow network clients

Waitress is a pure-Python HTTP server that supports request and response buffering, using in-memory and temporary file buffers to completely shield your Python application from slow network clients.

Waitress can be installed in your Heroku app using pip:

$ pip install waitress
$ pip freeze > requirements.txt

And then added to your Procfile like this:

web: waitress-serve --port=$PORT {project_name}.wsgi:application
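
If you would rather start Waitress from a small Python entry point instead of the waitress-serve command, the programmatic API looks roughly like this (myproject is a placeholder for your own project name, and the thread count is just an example):

# run_waitress.py -- illustrative alternative to the waitress-serve command.
import os

from waitress import serve

from myproject.wsgi import application  # placeholder project name

# Waitress buffers requests and responses, so a fixed pool of threads
# is enough even when clients are on slow connections.
serve(
    application,
    host="0.0.0.0",
    port=int(os.environ.get("PORT", 8000)),
    threads=4,  # example value; tune for your dyno size
)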

Why not use Gunicorn async workers?

The Gunicorn docs suggest using an alternative async worker class when serving requests directly to the internet. This avoids the problem of slow network clients by allowing thousands of asynchronous HTTP requests to be processed in parallel.
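
For comparison, that async setup looks roughly like this in a gunicorn.conf.py file (gevent is one of several supported worker classes, and the numbers are illustrative):

# gunicorn.conf.py -- illustrative async worker setup (the approach this
# article argues against on Heroku). Requires the gevent package.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "gevent"        # cooperative async workers instead of sync
worker_connections = 1000      # up to 1000 concurrent requests per worker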

Unfortunately, this approach introduces a different problem. The Django ORM opens a separate database connection for each request, quickly leading to thousands of simultaneous database connections. On the cheaper Heroku Postgres plans, this can easily cause requests to fail due to refused database connections.

By using a fixed pool of worker processes, Waitress makes it much easier to control the number of database connections being opened by Django, while still protecting you against slow network traffic.
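
As a rough sketch of keeping that number bounded, and assuming you configure your database with the dj-database-url package (as many Heroku Django apps do), you can also reuse each worker thread's connection between requests; with Waitress's fixed thread pool, the total number of open connections then stays capped by the thread count:

# settings.py -- illustrative database configuration for Heroku.
# Assumes the dj-database-url package; conn_max_age keeps each worker
# thread's connection open between requests instead of reconnecting.
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(conn_max_age=60),
}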

Check out django-herokuapp on GitHub

For an easy quickstart, and a more in-depth guide to running Django apps on Heroku, please check out the django-herokuapp project on GitHub.
