This document describes the current stable version of Celery (4.0).

First steps with Django

Using Celery with Django

Note

Previous versions of Celery required a separate library to work with Django, but since 3.1 this is no longer the case. Django is supported out of the box now so this document only contains a basic way to integrate Celery and Django. You’ll use the same API as non-Django users so you’re recommended to read the First Steps with Celery tutorial first and come back to this tutorial. When you have a working example you can continue to the Next Steps guide.

Note

Celery 4.0 supports Django 1.8 and newer versions. Please use Celery 3.1 for versions older than Django 1.8.

To use Celery with your Django project you must first define an instance of the Celery library (called an “app”).

If you have a modern Django project layout like:

- proj/
  - manage.py
  - proj/
    - __init__.py
    - settings.py
    - urls.py

then the recommended way is to create a new proj/proj/celery.py module that defines the Celery instance:

file: proj/proj/celery.py

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

Then you need to import this app in your proj/proj/__init__.py module. This ensures that the app is loaded when Django starts so that the @shared_task decorator (mentioned later) will use it:

proj/proj/__init__.py:

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']

Note that this example project layout is suitable for larger projects; for simple projects you may use a single contained module that defines both the app and tasks, like in the First Steps with Celery tutorial.

Let’s break down what happens in the first module. First, we import absolute imports from the future, so that our celery.py module won’t clash with the library:

from __future__ import absolute_import

Then we set the default DJANGO_SETTINGS_MODULE environment variable for the celery command-line program:

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

You don’t need this line, but it saves you from always passing in the settings module to the celery program. It must always come before creating the app instance, which is what we do next:

app = Celery('proj')

This is our instance of the library. You can have many instances, but there’s probably no reason for that when using Django.

We also add the Django settings module as a configuration source for Celery. This means that you don’t have to use multiple configuration files, and instead configure Celery directly from the Django settings; but you can also separate them if you want.

The uppercase namespace means that all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_, so for example the task_always_eager setting becomes CELERY_TASK_ALWAYS_EAGER, and the broker_url setting becomes CELERY_BROKER_URL.

You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object.

app.config_from_object('django.conf:settings', namespace='CELERY')
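
For illustration, a minimal fragment of proj/settings.py configured this way might look like the sketch below. The Redis broker URL is an assumption; use the URL of whatever broker you actually run:

# proj/settings.py -- illustrative fragment, not a complete settings file
CELERY_BROKER_URL = 'redis://localhost:6379/0'   # maps to the broker_url setting
CELERY_TASK_ALWAYS_EAGER = False                 # maps to task_always_eager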

Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery does have a way to auto-discover these modules:

app.autodiscover_tasks()

With the line above Celery will automatically discover tasks from all of your installed apps, following the tasks.py convention:

- app1/
  - tasks.py
  - models.py
- app2/
  - tasks.py
  - models.py

This way you don’t have to manually add the individual modules to the CELERY_IMPORTS setting.
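
If you prefer to list the packages explicitly instead of relying on INSTALLED_APPS, autodiscover_tasks also accepts an iterable of package names. A minimal sketch, where app1 and app2 are hypothetical app packages:

# Sketch: discover tasks.py only in the named packages
# ('app1' and 'app2' are placeholders for your own apps).
app.autodiscover_tasks(['app1', 'app2'])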

Finally, the debug_task example is a task that dumps its own request information. It uses the bind=True task option, introduced in Celery 3.1, to easily refer to the current task instance.

Using the @shared_task decorator

The tasks you write will probably live in reusable apps, and reusable apps cannot depend on the project itself, so you also cannot import your app instance directly.

The @shared_task decorator lets you create tasks without having any concrete app instance:

demoapp/tasks.py:

# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task


@shared_task
def add(x, y):
    return x + y


@shared_task
def mul(x, y):
    return x * y


@shared_task
def xsum(numbers):
    return sum(numbers)
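
To try these tasks you need a broker and a running worker (or CELERY_TASK_ALWAYS_EAGER = True for purely local experiments), and calling .get() additionally requires a result backend, such as the django-celery-results backend described below. A quick sketch from the Django shell:

# $ python manage.py shell
from demoapp.tasks import add

result = add.delay(2, 2)       # send the task to the broker
print(result.get(timeout=10))  # prints 4 once a worker has processed it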

See also

You can find the full source code for the Django example project at: https://github.com/celery/celery/tree/master/examples/django/

Relative Imports

You have to be consistent in how you import the task module. For example, if you have project.app in INSTALLED_APPS, then you must also import the tasks from project.app, or else the names of the tasks will end up being different.
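
As an illustration (project.app is a hypothetical reusable app), the task name is derived from the module path used when the module is imported:

# Consistent import: callers and the worker agree on the task name.
from project.app.tasks import add
print(add.name)   # 'project.app.tasks.add'

# If the same file were importable as plain 'app.tasks' (e.g. because the app
# directory is also on sys.path), the task would instead be registered as
# 'app.tasks.add' -- a different name, so sent tasks may not be found.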

See Automatic naming and relative imports

Extensions

django-celery-results - Using the Django ORM/Cache as a result backend

The django-celery-results extension provides result backends using either the Django ORM, or the Django Cache framework.

To use this with your project you need to follow these steps:

  1. Install the django-celery-results library:

    $ pip install django-celery-results

  2. Add django_celery_results to INSTALLED_APPS in your settings.py.

    Note that there are no dashes in the module name, only underscores.

  3. Create the Celery database tables by running the database migrations:

    $ python manage.py migrate django_celery_results

  4. Configure Celery to use the django-celery-results backend.

    Assuming you are using Django’s settings.py to also configure Celery, add the following setting:

    CELERY_RESULT_BACKEND = 'django-db'

    For the cache backend you can use:

    CELERY_RESULT_BACKEND = 'django-cache'
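
Putting the INSTALLED_APPS change and the result-backend setting together, the relevant settings.py fragment might look like the sketch below (illustrative only; keep your existing apps in the list). With the 'django-db' backend, stored results end up in the TaskResult model shipped with django_celery_results:

# proj/settings.py -- illustrative fragment
INSTALLED_APPS = [
    # ... your existing Django and project apps ...
    'django_celery_results',
]

CELERY_RESULT_BACKEND = 'django-db'

# Inspecting stored results from a Django shell (after running the migration):
#   from django_celery_results.models import TaskResult
#   TaskResult.objects.count()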
    

django-celery-beat - Database-backed Periodic Tasks with Admin interface.

See Using custom scheduler classes for more information.

Starting the worker process

In a production environment you’ll want to run the worker in the background as a daemon - see Daemonization - but for testing and development it is useful to be able to start a worker instance with the celery worker command, much as you’d use Django’s manage.py runserver:

$ celery -A proj worker -l info

For a complete listing of the command-line options available, use the help command:

$ celery help

Where to go from here

If you want to learn more you should continue to the Next Steps tutorial, and after that you can study the User Guide.


