[Fixed] RabbitMQ on EC2 Consuming Tons of CPU


Ok, I figured it out.

Here’s the relevant piece of documentation:

Old results will not be cleaned automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you’re running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
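To make that concrete, here is a minimal celeryconfig sketch. The setting name comes from the quoted docs; the backend choice and the 3600-second value are my own illustrative assumptions:

```python
# celeryconfig.py -- minimal sketch (hypothetical values).
# Keep the AMQP (RabbitMQ) result backend, but have RabbitMQ delete
# result queues that sit unused for an hour (via the x-expires argument).
CELERY_RESULT_BACKEND = "amqp"
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600  # seconds of disuse before a result queue expires
```

With this in place, abandoned result queues are dropped by the broker itself instead of accumulating until RabbitMQ's CPU usage climbs.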


To add to Eric Conner’s solution to his own problem, http://docs.celeryproject.org/en/latest/userguide/tasks.html#tips-and-best-practices states:

Ignore results you don’t want

If you don’t care about the results of a task, be sure to set the ignore_result option, as storing results wastes time and resources.

    @task(ignore_result=True)
    def mytask(…):
        something()

Results can even be disabled globally using the CELERY_IGNORE_RESULT setting.

That, along with Eric’s answer, is probably the bare minimum of best practices for managing your results backend.

If you don’t need a results backend, set CELERY_IGNORE_RESULT or simply don’t configure a results backend at all. If you do need one, set CELERY_AMQP_TASK_RESULT_EXPIRES as a safeguard against unused results building up. And if a specific task doesn’t need results, set ignore_result locally as shown above.
