Question
How do I pull the result of a task if I do not know beforehand which task was performed?

Here's the setup. Given the following source ('tasks.py'):
from celery import Celery

app = Celery('tasks',
             backend="db+mysql://u:p@localhost/db",
             broker='amqp://guest:guest@localhost:5672//')

@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y
with RabbitMQ 3.3.2 running locally:
marcs-mbp:sbin marcstreeter$ ./rabbitmq-server
RabbitMQ 3.3.2. Copyright (C) 2007-2014 GoPivotal, Inc.
## ## Licensed under the MPL. See http://www.rabbitmq.com/
## ##
########## Logs: /usr/local/var/log/rabbitmq/[email protected]
###### ## /usr/local/var/log/rabbitmq/[email protected]
##########
Starting broker... completed with 10 plugins.
with Celery 3.1.12 running locally:
-------------- [email protected] v3.1.12 (Cipater)
---- **** -----
--- * *** * -- Darwin-13.2.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x105dea3d0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
I can then import the method and retrieve the result with the 'task_id':
from tasks import add, mul
from celery.result import AsyncResult
result = add.delay(2,2)
task_id = result.task_id
result.get() # 4
result = AsyncResult(id=task_id)
result.get() # 4
result = add.AsyncResult(id=task_id)
result.get() # 4
# and the same for the 'mul' task. Just imagine I put it here
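A note worth adding here (my observation, not from the original post): the 'task_id' is a plain string, by default a UUID4, which is why it can be copied between REPLs, stored in a database, or sent to a client over HTTP. A small stdlib-only sketch:

```python
import uuid

# Stand-in for result.task_id from the example above: Celery ids are
# plain UUID strings by default, so they survive copy/paste and HTTP.
task_id = str(uuid.uuid4())

# A well-formed id round-trips through UUID parsing unchanged.
assert str(uuid.UUID(task_id)) == task_id
print(task_id)  # a random UUID string each run
```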
In the next example I split up these steps between processes. In one process I retrieve the 'task_id' like so:
from tasks import add
result = add.delay(5,5)
task_id = result.task_id
And in another process if I use the same 'task_id' (copied and pasted to another REPL, or in a different HTTP request) like so:
from celery.result import AsyncResult
result = AsyncResult(id="copied_task_id", backend="db+mysql://u:p@localhost/db")
result.get() # AttributeError: 'str' object has no attribute 'get_task_meta'
result.state # AttributeError: 'str' object has no attribute 'get_task_meta'
result.status # AttributeError: 'str' object has no attribute 'get_task_meta'
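Why the AttributeError? My reading, sketched with a mock rather than real Celery: AsyncResult stores whatever object is passed as `backend` and later calls `get_task_meta()` on it, and a connection-string is just a `str`, which has no such method. The class and method names below mirror Celery's, but this is an illustration of the failure mode, not Celery's actual implementation:

```python
# Minimal mock (no Celery needed) of how AsyncResult uses its backend.
class FakeBackend:
    def get_task_meta(self, task_id):
        return {'status': 'SUCCESS', 'result': 10}

class MiniAsyncResult:
    def __init__(self, id, backend=None):
        self.id = id
        self.backend = backend  # stored as-is, never validated

    @property
    def state(self):
        # A str backend blows up right here with
        # AttributeError: 'str' object has no attribute 'get_task_meta'
        return self.backend.get_task_meta(self.id)['status']

print(MiniAsyncResult('copied_task_id', FakeBackend()).state)  # SUCCESS

try:
    MiniAsyncResult('copied_task_id', 'db+mysql://u:p@localhost/db').state
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'get_task_meta'
```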
And in another process if I do:
from tasks import add # in this instance I know that an add task was performed
result = add.AsyncResult(id="copied_task_id")
result.status # "SUCCESS"
result.state # "SUCCESS"
result.get() # 10
I'd like to be able to get the result without knowing before hand what task is generating the result. In my real environment I plan on returning this task_id to the client and let them query the status of their job via an HTTP request.
Recommended answer
In the documentation I found (for celery.result.AsyncResult):

    Query task state.

    Parameters:
        backend – see backend.

    exception TimeoutError

    AsyncResult.app = None
So instead of providing the backend parameter I provided the "app" argument instead like so:
from celery.result import AsyncResult
from tasks import app
# Assuming add.delay(10,10) was called in another process
# and that I'm using a 'task_id' I retrieved from that process
result = AsyncResult(id='copied_task_id', app=app)
result.state # 'SUCCESS'
result.get() # 20
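Since the stated plan is to expose job status over HTTP, here is a minimal, hypothetical sketch of the glue that would sit in such an endpoint. The `status_payload` helper and its state-to-response mapping are my own assumptions (the state names themselves are Celery's); in a real handler, `state` and `result` would come from `AsyncResult(id=task_id, app=app)`:

```python
# Hypothetical helper: map a Celery state string (e.g. from
# AsyncResult(id=task_id, app=app).state) to a JSON-serializable
# payload an HTTP handler could return to the client.
def status_payload(state, result=None):
    if state == 'SUCCESS':
        return {'status': 'done', 'result': result}
    if state in ('FAILURE', 'REVOKED'):
        return {'status': 'failed'}
    # PENDING, STARTED, RETRY, etc. all read as "still working"
    return {'status': 'pending'}

print(status_payload('SUCCESS', 20))  # {'status': 'done', 'result': 20}
print(status_payload('PENDING'))      # {'status': 'pending'}
```

Keeping this mapping separate from Celery makes the endpoint trivially unit-testable without a broker or result backend running.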
This is probably obvious to many. It wasn't to me. For now all I can say is that this solution "just works", but I'd feel more comfortable if I knew it was the sanctioned way to do it. If you know of a section in the documentation that makes this more clear please post it in the comments or as an answer and I'll select it as the answer if I can.