Django is a powerful and easy-to-use web-development framework (after all, its mascot is a magical pony). It provides many helpful APIs for faster development; however, when you finish creating your masterpiece, deploying it to a real server for use by real people can be overwhelming and even intimidating. This post is meant to share some of my experiences deploying Django apps which led me to my current practices. I will also try to explain some basic concepts of deploying a web app, which will hopefully make this post more useful (and readable) for beginners. Finally, just a heads up that I am not an expert in deploying Django applications, so please don't hesitate to comment to correct me or suggest new awesome things.

Request cycle


Before I get into any specifics, first a few words about the request cycle. The cycle can be broken into the following steps:

  1. A user makes an HTTP request to a server on a specific port (HTTP uses port 80, for example)
  2. The server machine has a web-server program running (e.g. nginx, or Apache for all the PHP people) which listens for requests on that port
  3. Web-server gets a user request
  4. Web-server parses the request
  5. Web-server processes the request
  6. Web-server returns the response to the user

Steps 1-4 and 6 are pretty self-explanatory. It's step 5 where things get more complicated (when are they not). If the server determines in step 4 that the request is for a static file, then in step 5 the web-server simply gets the file and returns its contents with the proper headers in step 6. If the request however is for dynamic content, things become fun. At that point, since most web-servers do not generate dynamic content (ok, index pages do not count), the web-server has to get the content of the response from some other program. This action of the web-server requesting the content brings us to the next section - the interfaces.
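To make steps 4 and 5 a bit more concrete, here is a hypothetical sketch (the request string and the `/static/` convention are made up for illustration) of how a web-server might parse the request line of a raw HTTP request and decide between static and dynamic content:

```python
# A raw HTTP request as it arrives over the socket (hypothetical example)
raw_request = "GET /static/logo.png HTTP/1.1\r\nHost: example.com\r\n\r\n"

# Step 4: the web-server parses the request line
request_line = raw_request.split("\r\n")[0]
method, path, version = request_line.split(" ")

# Step 5: based on the path, the web-server can now decide whether to
# serve a static file itself or hand the request off to a web-application
is_static = path.startswith("/static/")
```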


The web-application program which generates the dynamic content can be anything (or written in anything) - Python, Ruby, C, etc. (heck, probably even INTERCAL). Because of this vast range of programs which can generate the content of the response, a spec is necessary for all communication between the web-server and the web-application. More formally, this spec is called an interface.

One of the earliest interfaces, from the good old days when everyone used Apache, was CGI (ok, CGI was not made specifically for Apache but I associate them together). It was very powerful because any script could now generate content. The problem however was that CGI was very slow and resource-hungry. In CGI, for every request the web-server had to fire up a whole new subprocess for the web-application to do its thing. This made things sluggish because firing up a new subprocess is expensive, and it requires a lot of resources when you need to spin up as many of them as any decent web-site would require.
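For illustration, a CGI "script" could be as simple as the following sketch (the header-blank-line-body format is how CGI responses look; the greeting itself is made up). The key point is that the web-server starts a brand new process like this for every single request:

```python
import sys

# A CGI script writes its response to stdout: the headers, a blank
# line, and then the body. The web-server relays all of it to the user.
response = "Content-Type: text/plain\r\n\r\nHello from a CGI script!\n"
sys.stdout.write(response)
```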

To solve CGI's performance problems, FastCGI (and SCGI, but I'll refer to them both as FastCGI) came about. It solved the performance issue by not requiring the web-server to fire up a new subprocess for each request. Instead it launched the web-application as a daemon process to which the web-server would hand over all requests. This means that a single daemon process would handle many requests over its life-span. That obviously reduces system load because no new processes have to be started. Even though FastCGI is a great improvement over CGI, it still has a few issues. One of them is that it has a low-level API. That means that making a FastCGI-compliant web-app is more tedious than it has to be. Another problem with FastCGI is that it does not provide any methods for implementing common functionality such as authentication. My final issue with FastCGI (or in this case Remote FastCGI) is when it is used in conjunction with reverse-proxy (load-balancing) servers spanning multiple physical machines. In that case it gets a bit tricky to set up.

To solve FastCGI's issues, Python folks developed a new interface called the Web Server Gateway Interface (WSGI). It fixes FastCGI's issues by making the API simpler and by adding the ability to plug in middleware (e.g. auth). That's why WSGI is much better than CGI or FastCGI, and by extension that's why we will use it to deploy our Django project.

Now that some theory is covered, let's move on to more practical things. Hopefully the above concepts will help you understand the code. When I was learning all of this, it was the conceptual things which were confusing, and once I understood what was going on, the code actually turned out to be rather simple.


Deploying a Django project does not require a lot of understanding except for one very important aspect - WSGI expects an application callable. What that means is that when using WSGI, you are supposed to provide it a callable function/application/framework (callable(f) == True). The WSGI server will then call that callable for every request. While calling it, the server will also pass it all the necessary information for the callable to process the request (e.g. the request URL). The following is an example of a simple WSGI callable (demo_app):

# Example based on the wsgiref module documentation

# import a sample WSGI callable
from wsgiref.simple_server import demo_app

# import the WSGI server which will take the callable
from wsgiref.simple_server import make_server

# demo_app is a WSGI-compliant callable which will be called for each request.
# In this case, it simply returns a Hello World message.
httpd = make_server('', 8000, demo_app)

# Start an infinite loop which listens for requests on port 8000
# and calls demo_app for each request
httpd.serve_forever()
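Since demo_app hides the details, here is a minimal hand-written WSGI callable for comparison (the name hello_app and the message are my own, not part of any spec). It shows the whole contract: receive the request environment and a start_response function, report the status and headers, and return the body:

```python
def hello_app(environ, start_response):
    """A minimal WSGI-compliant callable."""
    body = b"Hello World\n"
    # The callable reports the status and headers through start_response...
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    # ...and returns an iterable of bytestrings as the response body
    return [body]
```

You could pass hello_app to make_server() exactly like demo_app above.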


I guess that you already know what Django is and how to use it since you are reading this post, so I won't waste any time describing how awesome and magical it really is. What you might not know however is that Django as a framework implements a WSGI-compliant callable. That callable is responsible for invoking all of the Django components such as the urlconf, views, etc. Starting with Django 1.4, all projects created using the $ django-admin.py startproject foo command include a wsgi.py file. That file defines a variable called application which is Django's WSGI-compliant callable. You can, for example, pass that callable to the above example and let the magical ponies take over.
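For reference, the generated wsgi.py boils down to something like the following sketch ("foo" stands in for your project's name; treat this as a paraphrase rather than the exact file Django writes out):

```python
import os

# Point Django at the project's settings module ("foo" is the project name here)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "foo.settings")

# get_wsgi_application() returns Django's WSGI-compliant callable
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```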


I mentioned in the interfaces section that one of the downsides of FastCGI is that you cannot easily deploy it with load balancing over multiple machines (ok, it's not that difficult but it's not a clean and simple approach); however, I did not mention how WSGI solves that. The solution is actually simple. Due to WSGI's simple API, there are many WSGI web-servers written in Python. Those are self-contained web-servers, meaning they take HTTP requests, process them using the given callable and return an HTTP response. Theoretically you don't need nginx and you can just pass your web traffic straight to those web-servers and it will work, but please don't do that. Those servers do not do many of the things nginx does and does well, such as protection against DoS attacks, SSL support, serving static files, etc. What you should do instead is fire up the WSGI-server for serving Django and put that server behind nginx using an HTTP proxy. This makes the Django project and its WSGI-server more autonomous/decoupled, and it allows you to easily deploy Django and nginx on multiple machines within the same LAN (or not, but for obvious reasons LAN is much faster compared to WAN).
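To make the multi-machine setup concrete, here is a hedged sketch of an nginx upstream block (the IPs and the upstream name are made up for illustration); by default nginx will round-robin the proxied requests across the listed WSGI-servers:

```nginx
upstream django_servers {
    server 192.168.1.10:8000;  # gevent WSGI-server on machine 1
    server 192.168.1.11:8000;  # gevent WSGI-server on machine 2
}

server {
    listen 80;
    location / {
        proxy_pass http://django_servers;
    }
}
```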

Now on to gevent. gevent is a library based on greenlets (light threads) which implements a WSGI-server. What makes it different though is that it is blazing fast. Nicholas Piël has an excellent post where he benchmarks many of the popular Python web-servers, and gevent consistently outperforms the competition. That's why I like gevent and I usually use it to deploy my Django projects.

Using gevent is actually very simple. It's very similar to the sample WSGI-server above, except instead of using the WSGI-server from the wsgiref package, you use the one from gevent. Below is a gevent setup script (I usually save it as a standalone file in the project root so that I can easily run it with a single $ python command):

# import the gevent WSGI-server
# (note: in newer gevent versions this lives in gevent.pywsgi)
from gevent.wsgi import WSGIServer

# import the Django WSGI callable
from wsgi import application

# start the server by providing Django callable to gevent
WSGIServer(('', 8000), application, spawn=None).serve_forever()


Since I am too lazy to create the above file for each of my projects, I created a django-gevent-deploy Django app which, when installed (added to INSTALLED_APPS in settings.py), adds a custom rungevent command to manage.py. With this command you can easily fire up the gevent WSGI server by executing $ python manage.py rungevent. The command's default parameters actually do the same thing as the example above. You can check out more about the package on PyPI or on GitHub.


Now that we have our Django application running with gevent, it's time to link it with nginx. Since gevent runs an HTTP server, to link the two, all nginx has to do is redirect all of its requests to the gevent server. To redirect the requests, nginx has a proxy_pass directive. To use the directive, all you have to specify is the IP and port of the gevent server and you are done. Below is an outline of the nginx config:

server {
    listen 80;
    # ...
    location / {
        proxy_pass http://localhost:8000; # make sure the IP and port are correct
        # ...
    }
    # ...
}
That is about it for the minimal setup. Now all you have to do is restart nginx and you are ready for your next project.
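One more thing worth doing at this point: since nginx, not gevent, should be serving your static files (as mentioned earlier), you can add a location block for them. A sketch, assuming your static files are collected into a single directory (the path below is a placeholder):

```nginx
location /static/ {
    # must point to the directory holding your collected static files
    # (e.g. Django's STATIC_ROOT after running collectstatic)
    alias /path/to/project/static/;
}
```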


No matter how much you prepare, things can always go wrong. One problem with the current setup is that you have to manually start gevent's WSGI server (by executing the setup script or $ python manage.py rungevent). That means that every time the server machine restarts, or Django exits due to an uncaught exception, you will have to manually restart gevent's WSGI server. It turns out that you don't have to if you use supervisor. Supervisor is a process management system written in Python - meaning it can manage your gevent server for you so that if it shuts down, supervisor will automatically relaunch it. Adding supervisor can guarantee that your masterpiece will always be online, unless of course there is Armageddon (or the server is simply not plugged in). You can read more about supervisor on its official site, but the following is a supervisor program rule I use for my projects:

; "myproject" and the paths are placeholders - adjust them for your project
[program:myproject]
command=/path/to/virtualenv/bin/python /path/to/project/manage.py rungevent


Hopefully this post will be helpful to you deploying your masterpiece. Here is a summary of what it takes to deploy Django app on gevent using nginx:

  1. Find Django's WSGI callable (usually called application, defined in wsgi.py)
  2. Launch gevent's WSGIServer with the proper parameters, or use the django-gevent-deploy package
  3. Add gevent's server to the nginx config using the proxy_pass directive
  4. Enjoy your work

The above can also be summarized in a simple diagram: user → nginx (proxy_pass) → gevent WSGIServer → Django's application callable.

Finally if you have any questions/comments/suggestions please don't hesitate to comment below.

