August 02, 2016

Scaling Django Channels with Docker

Earlier this summer, I attended PyCon in Portland, Oregon. The talk that excited me the most, by far, was Andrew Godwin’s introduction to Django Channels. In a nutshell, Django Channels creates a simple mechanism for writing web applications in Django that support the WebSocket protocol. WebSockets are an exciting way to implement asynchronous content, like chat rooms and live feeds, where updates can be pushed to clients without the need for polling.
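To give a sense of the programming model, here is a minimal sketch using the Channels 1.x API (my own illustrative example, not code from the talk): a consumer is a plain function wired to a channel in a routing table.

```python
# routing.py -- minimal Channels 1.x sketch (illustrative, not from the talk)
from channels.routing import route

def ws_message(message):
    # Reply over the open WebSocket immediately; the server pushes data
    # whenever it has something to say, with no client-side polling.
    message.reply_channel.send({"text": message.content["text"]})

channel_routing = [
    route("websocket.receive", ws_message),
]
```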

Django Channels is fairly easy to get running, especially if you reference Andrew Godwin’s sample applications. One thing Andrew’s examples don’t highlight, however, is that the Channels workers are incredibly easy to scale.

Another Django Channels tutorial, written by Django core developer Jacob Kaplan-Moss, covers how to deploy and scale a Django Channels application on Heroku. Part of his guide addresses the need to scale Channels workers as load increases. Workers essentially take Channels tasks off the Redis queue and execute them, much like Celery workers do. I suggest you read his guide (it’s fantastic), but the excerpt below highlights how simple it is to scale the Channels worker process on Heroku:

```bash
heroku ps:scale worker=3
```
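On Heroku, those process types come from the app’s Procfile. Jacob’s guide defines a web process running daphne and a worker process running runworker; adapted to the multichat app discussed below, the Procfile would look roughly like this (a sketch, not a quote from his guide):

```
web: daphne multichat.asgi:channel_layer --port $PORT --bind 0.0.0.0
worker: python manage.py runworker
```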

Easy, right? Of course, not everyone uses Heroku to deploy web applications. If you’re using another cloud service or hosting your own infrastructure, all you technically need to do is run the following command on several different servers:

```bash
./manage.py runworker
```
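The one requirement is that every server points its channel layer at the same Redis instance, so that all workers consume from a single queue. In Channels 1.x that is the CHANNEL_LAYERS setting; a typical configuration looks like this sketch (the host name and routing module are placeholders for your project’s values):

```python
# settings.py -- every web and worker process must share this channel layer.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # Placeholder host; point this at the Redis instance that
            # all of your servers can reach.
            "hosts": [("redis", 6379)],
        },
        # Placeholder module path; use your project's routing table.
        "ROUTING": "myproject.routing.channel_routing",
    },
}
```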

The logistics of managing worker processes this way are not trivial, especially if you want your worker capacity to change dynamically based on current or predicted load.

Splitting and scaling the web interface and worker processes is a great use case for containerizing with Docker and Docker Compose. Let’s go back to Andrew Godwin’s channels-examples repo, where you’ll find docker-compose.yml configurations for running his samples. If you look closely, you’ll notice that the configuration defines a single Django container that executes `python manage.py runserver`, which is neither suitable for production nor scalable.

In the following example, I’ll be updating Andrew’s multichat app to make it scalable. Follow the instructions in Andrew’s README, but replace the default docker-compose.yml with the following before running any docker-compose commands:

New docker-compose.yml:

```yaml
version: "2"

services:
  redis:
    image: redis:latest
  web:
    build: .
    command: daphne -b 0.0.0.0 -p 8000 multichat.asgi:channel_layer
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - redis
  worker:
    build: .
    command: python manage.py runworker
    volumes:
      - .:/code
    links:
      - redis
```
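The `build: .` entries assume a Dockerfile in the project root. Andrew’s repo already includes one, so you don’t need to write it, but a minimal equivalent would look roughly like:

```dockerfile
# Minimal sketch of the image the compose file builds; prefer the
# Dockerfile that ships with Andrew's repo.
FROM python:3.5
ENV PYTHONUNBUFFERED 1
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
```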

Running `./manage.py runserver`, as Andrew’s original docker-compose.yml did, creates a single process with two threads: the core Django interface and a Channels worker. Since we want to scale the workers, the new docker-compose.yml splits those two roles into separate containers: `web` and `worker`.

The web container no longer uses runserver. Instead, it runs daphne, a web server similar to Gunicorn that speaks ASGI, the WebSocket-capable successor to WSGI. Daphne serves only the core Django interface, so the worker process is expected to run separately. This is essentially the same stack Jacob creates in his Heroku guide.
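The `multichat.asgi:channel_layer` argument points daphne at the project’s ASGI entry point. In Channels 1.x this is a tiny module, along these lines:

```python
# multichat/asgi.py -- exposes the channel layer for daphne to serve.
import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "multichat.settings")
channel_layer = get_channel_layer()
```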

With the new docker-compose.yml in place, running `docker-compose up -d` creates three containers: Redis, the Django web server, and the Channels worker process. Channels tasks are received by the web server, placed in a Redis queue, and then consumed by the worker process.

```text
~/channels-examples/multichat# docker-compose ps
       Name                      Command              State           Ports
--------------------------------------------------------------------------------------
multichat_redis_1    docker-entrypoint.sh redis ...   Up      6379/tcp
multichat_web_1      daphne -b 0.0.0.0 -p 8000 ...    Up      0.0.0.0:8000->8000/tcp
multichat_worker_1   python manage.py runworker       Up
```

At this point, scaling the workers is just as easy as it is on Heroku:

```bash
docker-compose scale worker=3
```

Listing the processes shows two new workers:

```text
~/channels-examples/multichat# docker-compose ps
       Name                      Command              State           Ports
--------------------------------------------------------------------------------------
multichat_redis_1    docker-entrypoint.sh redis ...   Up      6379/tcp
multichat_web_1      daphne -b 0.0.0.0 -p 8000 ...    Up      0.0.0.0:8000->8000/tcp
multichat_worker_1   python manage.py runworker       Up
multichat_worker_2   python manage.py runworker       Up
multichat_worker_3   python manage.py runworker       Up
```

Scaling the Redis and web containers is also possible, but because they bind to static ports, they require a load balancer to scale properly. That isn’t difficult, but it’s beyond the scope of this guide. The worker processes don’t need a load balancer because they are simple consumers and don’t bind to any ports.

For a production deployment, you’d probably want to plug Compose into a Swarm cluster so that the worker processes actually run on separate machines. Other tools exist for managing clusters, such as Mesos and Kubernetes. Regardless of the tool you use, the concept is simple: once workers are containerized, scaling is as easy as creating more instances.
