Problem description
I have a Django project on an Ubuntu EC2 node, where I have been setting up an asynchronous task queue using Celery. I've been trying to follow http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/
I've been able to get a basic task working at the command line, using:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=tp.celery:app worker --loglevel=INFO
-------------- celery@ip-172-31-22-65 v3.1.17 (Cipater)
---- **** -----
--- * *** * -- Linux-3.13.0-44-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tp:0x7f66a89c0470
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
However, if I run other celery commands, like the one below, I get the following:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery worker
[2015-04-03 13:17:21,553: WARNING/MainProcess] /home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/apps/worker.py:161:
-------------- celery@ip-172-31-22-65 v3.1.17 (Cipater)
---- **** -----
--- * *** * -- Linux-3.13.0-44-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: default:0x7f1653eae7b8 (.default.Loader)
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[2015-04-03 13:17:21,571: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
It appears that Celery thinks I'm using amqp as a broker, but I'm using redis!
Based on Celery tries to connect to the wrong broker, it seems likely that Celery cannot find the configuration file and falls back to the defaults.
In that question they recommend:
import your celery and add your broker like this:
celery = Celery('task', broker='redis://127.0.0.1:6379')
celery.config_from_object(celeryconfig)
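For context, the `celeryconfig` in that suggestion is just a plain Python module of Celery settings, imported and passed to `config_from_object`. A minimal sketch (the Redis URL and the particular settings here are assumptions, using Celery 3.x setting names):

```python
# celeryconfig.py -- a plain module of Celery settings (Celery 3.x names).
# The broker URL below is an assumption; adjust host/port/db as needed.
BROKER_URL = 'redis://127.0.0.1:6379/0'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
```

You would then do `import celeryconfig` followed by `celery.config_from_object(celeryconfig)`. In a Django project, however, these settings normally live in Django's own settings module rather than a separate file.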
Where would I do this? Is my celery.py file (below) the same as a Celery config?
/projects/tp/tp/celery.py
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tp.settings')
django.setup()
app = Celery('hello_django')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
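Since this `celery.py` calls `app.config_from_object('django.conf:settings')`, the broker must be declared in Django's settings module; there is no separate config file for Celery to find. A sketch of the relevant excerpt (the exact values are assumptions, using Celery 3.x setting names):

```python
# tp/settings.py (excerpt) -- read by Celery via
# app.config_from_object('django.conf:settings').
BROKER_URL = 'redis://localhost:6379/0'   # assumed Redis location
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
```

With `BROKER_URL` set here, any worker started with `--app=tp.celery:app` (or `-A tp`) will use Redis rather than the amqp default.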
tasks.py:
from __future__ import absolute_import
from celery import shared_task
from django.core.cache import cache
@shared_task
def tester1(param):
    return 'The test task executed with argument "%s" ' % param
tp/tp1/views.py:
@csrf_exempt
def tester(request):
    tester1.delay('hi')
    return HttpResponse('hTML')
/etc/supervisor/conf.d/tp-celery.conf
[program:tp-celery]
command=/home/ubuntu/.virtualenvs/env1/bin/celery --app=tp.celery:app worker --loglevel=INFO
directory=/home/ubuntu/projects/tp
user=ubuntu
numprocs=1
stdout_logfile=/var/log/celery-worker-out.log
stderr_logfile=/var/log/celery-worker-err.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
/var/log/celery-worker-out.log
-------------- celery@ip-172-31-22-65 v3.1.17 (Cipater)
---- **** -----
--- * *** * -- Linux-3.13.0-44-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tp:0x7fa33e424cf8
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. testapp.tasks.tester1
Don't run a bare celery worker; run it like celery -A tp worker -l info instead. That way it picks up the app's config.
To inspect the worker:
celery --app=tp.celery:app inspect active_queues
or simply:
celery -A tp inspect active_queues
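For `celery -A tp` to find the app, the Celery 3.1 Django docs rely on the project package importing it in `__init__.py`. If that file does not already do so, a sketch (assuming the `/projects/tp/tp/` layout shown above) is:

```python
# tp/__init__.py -- load the Celery app whenever Django starts, so
# @shared_task binds to it and `celery -A tp` can locate it.
from __future__ import absolute_import
from .celery import app as celery_app
```

This is why the `-A tp` form picks up the Redis broker configured for the app, while a plain `celery worker` with no `-A` falls back to the built-in default app and its amqp broker.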