[Solved]-Python multiprocessing Pool hangs on Ubuntu server


This seems to be a variation on Using python's Multiprocessing makes response hang on gunicorn, so this may be a dupe.

That said, do you have to use multiprocessing (MP)? You might honestly be better served farming this out to something like Celery. The MP processes may be getting killed along with the gunicorn worker when it dies, since the worker owns them, and depending on the server config that could happen pretty frequently. If you have a very long-running job you can still farm it out to Celery; it just takes a bit more configuration.
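If you do go the Celery route, the shape is roughly this. A minimal sketch, assuming a Redis broker on localhost; the module name, broker URL, and do_heavy_work helper are placeholders, not from the question:

```python
# tasks.py -- hypothetical module layout; broker URL is a placeholder.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def long_running_job(payload):
    # The work you were handing to multiprocessing.Pool goes here.
    # It now runs in a Celery worker process, outside gunicorn, so a
    # dying gunicorn worker can no longer take the job down with it.
    return do_heavy_work(payload)  # hypothetical helper
```

From the gunicorn request handler you would then call `long_running_job.delay(payload)` and return immediately, with a `celery -A tasks worker` process doing the actual work.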



Are you using some async Gunicorn worker? If so, try the default sync worker and see if you can reproduce the problem.

If the problem can only be reproduced with async workers, make sure the multiprocessing module is patched correctly for that worker's event loop (e.g. gevent or eventlet monkey-patching).
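To rule the async worker out, you can start gunicorn with the worker class pinned explicitly. A sketch; the app module path is a placeholder:

```shell
# sync is gunicorn's default worker class; naming it explicitly rules
# out gevent/eventlet monkey-patching as the cause of the hang.
gunicorn --worker-class sync --workers 4 myapp.wsgi:application
```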




Try changing

pool = Pool(processes = cpu_count)

to

pool = Pool(cpu_count)

This assumes that you've imported Pool from multiprocessing; otherwise you'll need to do

from multiprocessing import Pool



I had a similar problem. I solved it by giving each pool worker a fixed number of tasks to execute, setting the maxtasksperchild argument like this: Pool(..., maxtasksperchild=1). That way each pool worker is freed up automatically after completing the given number of tasks.

This is what the Pool documentation says:

Worker processes within a Pool typically live for the complete duration of the Pool's work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.
