[Solved]-Python Django Asynchronous Request handling


Asynchronous tasks can be accomplished in Python using Celery. You can simply push the task onto a Celery queue and it will be performed asynchronously. You can then poll from the result page to check whether it has completed.

Another alternative is something like Tornado.
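The push-and-poll idea can be illustrated in-process with just the standard library. This is a sketch only: with real Celery the queue and worker live behind a broker in separate processes, and `AsyncResult` does the polling for you.

```python
import queue
import threading
import uuid

results = {}            # task_id -> (state, value); stands in for a result backend
tasks = queue.Queue()   # stands in for the broker queue

def worker():
    # consume tasks forever, recording each result as it finishes
    while True:
        task_id, func, args = tasks.get()
        try:
            results[task_id] = ("SUCCESS", func(*args))
        except Exception as e:
            results[task_id] = ("FAILURE", e)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(func, *args):
    # "push the task to the queue" and hand back an id the client can poll with
    task_id = str(uuid.uuid4())
    results[task_id] = ("PENDING", None)
    tasks.put((task_id, func, args))
    return task_id

task_id = submit(lambda x: x * 2, 21)
tasks.join()                      # a real client would poll instead of blocking
state, value = results[task_id]   # ("SUCCESS", 42)
```

A result page would simply look up `results[task_id]` on each poll and render "pending" until the state flips.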


Another strategy is to write a threading class that starts up custom management commands you author to behave as worker threads. This is perhaps a little lighter weight than working with something like Celery, and of course has both advantages and disadvantages. I have also used this technique to sequence/automate migration generation and application during application startup (because it lives in a pipeline). My gunicorn startup script then starts these threads in pre_exec() or when_ready(), etc., as appropriate, and stops them in on_exit().

# Description: Asynchronous Worker Threading via Django Management Commands
# Lets you run an arbitrary Django management command, either a pre-baked one like migrate,
# or a custom one that you've created, as a worker thread, that can spin forever, or not.
# You can use this to take care of maintenance tasks at start-time, like db migration,
# db flushing, etc, or to run long-running asynchronous tasks.  
# I sometimes find this to be a more useful pattern than using something like django-celery,
# as I can debug/use the commands I write from the shell as well, for administrative purposes.

import time
import logging
import threading

from django.core.management import call_command

class DjangoWorkerThread(threading.Thread):
    """
    Initializes a separate thread for running an arbitrary Django management command.  This is
    one (simple) way to make asynchronous worker threads.  There exist richer, more complex
    ways of doing this in Django as well (django-celery).

    The advantage of this pattern is that you can run the worker from the command line as well,
    via manage.py, for the sake of rapid development, easy testing, debugging, management, etc.

    :param commandname: name of a properly created Django management command, which exists
        inside the app/management/commands folder in one of the apps in your project.

    :param arguments: string containing command line arguments formatted like you would
        when calling the management command via manage.py in a shell

    :param restartwait: integer seconds to wait before restarting worker if it dies,
        or if a once-through command, acts as a thread-loop delay timer
    """

    def __init__(self, commandname, arguments="", restartwait=10, logger=""):
        super(DjangoWorkerThread, self).__init__()
        self.commandname = commandname
        self.arguments = arguments
        self.restartwait = restartwait
        self.name = commandname
        self.event = threading.Event()
        if logger:
            self.l = logger
        else:
            self.l = logging.getLogger('root')

    def run(self):
        """Start the thread."""
        exceptioncount = 0
        exceptionlimit = 10
        while not self.event.is_set():
            try:
                if self.arguments:
                    self.l.info('Starting ' + self.name + ' worker thread with arguments ' + self.arguments)
                    call_command(self.commandname, *self.arguments.split(' '))
                else:
                    self.l.info('Starting ' + self.name + ' worker thread with no arguments')
                    call_command(self.commandname)
                self.event.wait(self.restartwait)
            except Exception as e:
                self.l.error(self.commandname + ' Unknown error: {}'.format(str(e)))
                exceptioncount += 1
                if exceptioncount > exceptionlimit:
                    self.l.error(self.commandname + " : " + self.arguments + " : Exceeded exception retry limit, aborting.")
                    self.event.set()
                self.event.wait(self.restartwait)
        self.l.info('Stopping command: ' + self.commandname + " " + self.arguments)

    def stop(self):
        """Nice Stop

        Stop nicely by setting an event.
        """
        self.l.info("Sending stop event to self...")
        self.event.set()
        # then make sure it's dead...and schwack it harder if not.
        # kill it with fire!  be mean to your software.  it will make you write better code.
        self.l.info("Sent stop event, checking to see if thread died.")
        if self.is_alive():
            self.l.info("Still not dead, joining with a timeout...")
            time.sleep(0.1)
            self.join(self.restartwait)

def start_worker(command_name, command_arguments="", restart_wait=10, logger=""):
    """
    Starts a background worker thread running a Django management command.

    :param str command_name: the name of the Django management command to run,
        typically would be a custom command implemented in yourapp/management/commands,
        but could also be used to automate standard Django management tasks
    :param str command_arguments: a string containing the command line arguments
        to supply to the management command, formatted as if one were invoking
        the command from a shell
    :param int restart_wait: seconds to wait before restarting the worker if it dies
    """
    if logger:
        l = logger
    else:
        l = logging.getLogger('root')

    # Start the thread
    l.info("Starting worker: " + command_name + " : " + command_arguments + " : " + str(restart_wait))
    worker = DjangoWorkerThread(command_name, command_arguments, restart_wait, l)
    worker.start()
    l.info("Worker started: " + command_name + " : " + command_arguments + " : " + str(restart_wait))

    # Return the thread instance
    return worker


def stop_worker(worker, logger=""):
    """
    Gracefully shuts down the worker thread.

    :param threading.Thread worker: the worker thread object
    """
    if logger:
        l = logger
    else:
        l = logging.getLogger('root')

    # Shut down the thread
    l.info("Stopping worker: " + worker.commandname + " : " + worker.arguments + " : " + str(worker.restartwait))
    worker.stop()
    worker.join(worker.restartwait)
    l.info("Worker stopped: " + worker.commandname + " : " + worker.arguments + " : " + str(worker.restartwait))
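As noted above, this can be wired into gunicorn's server hooks from a config file. A minimal sketch, assuming the module above is importable as `myproject.workers` and that `my_custom_command` is a management command you have written (both names are hypothetical):

```python
# gunicorn.conf.py -- sketch only; adjust the import path and command name
from myproject.workers import start_worker, stop_worker

background_workers = []

def when_ready(server):
    # start the long-running worker once the gunicorn master is ready
    background_workers.append(start_worker("my_custom_command", restart_wait=30))

def on_exit(server):
    # stop the workers cleanly when gunicorn shuts down
    for worker in background_workers:
        stop_worker(worker)
```

Using when_ready() keeps the worker out of the request-handling worker processes; pre_exec() would work similarly if you need it earlier in the lifecycle.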


The long-running task can be offloaded with Celery. You can still get all the updates and results; your web application code takes care of polling for them. http://blog.miguelgrinberg.com/post/using-celery-with-flask explains how one can achieve this.

Some useful steps:

  1. Configure Celery with a result backend.
  2. Execute the long-running task asynchronously.
  3. Let the task update its state periodically, or whenever it completes a stage of the job.
  4. Poll from the web application to get the status/result.
  5. Display the results in the UI.
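
The state-update-and-poll loop (steps 3 and 4) can be sketched in-process; in real Celery the same roles are played by `self.update_state(state='PROGRESS', meta=...)` inside a bound task and `AsyncResult(task_id)` in the polling view:

```python
import threading
import uuid

progress = {}  # task_id -> status dict; stands in for the Celery result backend

def long_task(task_id, total=5):
    # step 3: the task records its state as it works through each stage
    for i in range(1, total + 1):
        progress[task_id] = {"state": "PROGRESS", "current": i, "total": total}
        # ... do one stage of real work here ...
    progress[task_id] = {"state": "SUCCESS", "current": total, "total": total}

def get_status(task_id):
    # step 4: what the polling view reads and returns to the UI
    return progress.get(task_id, {"state": "PENDING"})

task_id = str(uuid.uuid4())
t = threading.Thread(target=long_task, args=(task_id,))
t.start()
t.join()
status = get_status(task_id)  # {"state": "SUCCESS", "current": 5, "total": 5}
```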

This does require some bootstrapping to wire together, but once done it can be reused and is fairly performant.



It’s the same process as a synchronous request. You will use a view that returns a JsonResponse. The ‘tricky’ part is on the client side, where you have to make the asynchronous call to the view.
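A minimal sketch of the payload such a view might return (the field names here are an assumption, not a Django convention):

```python
import json

def task_status_payload(state, result=None):
    # shape of the JSON the client-side poller receives (hypothetical fields)
    return {"state": state, "result": result}

# In Django this would be wrapped in a JsonResponse, roughly:
#
#   from django.http import JsonResponse
#
#   def task_status(request, task_id):
#       state, result = look_up_task(task_id)  # hypothetical helper
#       return JsonResponse(task_status_payload(state, result))

payload = json.dumps(task_status_payload("SUCCESS", 42))
```

On the client side, a small script polls this endpoint (with fetch or XMLHttpRequest) until the state reports completion, then updates the page.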

