Problem Description
I am trying to start a celery worker on Elastic Beanstalk (EB) but get an error which doesn't explain much.
Command in config file in .ebextensions dir:
```
03_celery_worker:
    command: "celery worker --app=config --loglevel=info -E --workdir=/opt/python/current/app/my_project/"
```
The listed command works fine on my local machine (with just the workdir parameter changed).
Error from EB:
I have updated the celery worker command with the parameter --uid=2 and the privileges error disappeared, but command execution still failed due to:
Any suggestions as to what I am doing wrong?
Recommended Answer
As I understand it, this means that the listed command cannot be run from EB container commands. You need to create a script on the server and run celery from that script. This post describes how to do it.
Update: You need to create a config file in the .ebextensions directory; I called it celery.config. The post linked above provides a script which works almost fine, but it needs some minor additions to work 100% correctly. I had issues with scheduling periodic tasks (celery beat). Below are the steps to fix it:
- Install django-celery-beat (add it to requirements): pip install django-celery-beat, add it to INSTALLED_APPS, and use the --scheduler parameter when starting celery beat. Instructions are here; a settings sketch is shown after this list.
- In the script you specify the user that runs each program. For the celery worker it is the celery user, which is added earlier in the script (if it doesn't exist). When I tried to start celery beat I got a PermissionDenied error, which means the celery user doesn't have all the necessary rights. Using ssh I logged in to EB, looked through the list of all users (cat /etc/passwd) and decided to use the daemon user.
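
For reference, here is a minimal sketch of the Django side of the first step. It assumes the project's settings live in config/settings.py and that the Celery app loads Django settings with the CELERY_ namespace; both are assumptions about this project's layout, not something stated in the answer.

```python
# config/settings.py (assumed path) -- register django-celery-beat
INSTALLED_APPS = [
    # ... existing Django and project apps ...
    "django_celery_beat",
]

# Optional: make the database scheduler the default, so --scheduler does not
# have to be passed on the command line. This only takes effect if the Celery
# app reads Django settings with namespace="CELERY".
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"
```

After adding the app, run python manage.py migrate so the scheduler's tables exist; periodic tasks can then be managed through the Django admin or the PeriodicTask / IntervalSchedule models.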
The listed steps resolved the celery beat errors. The updated config file with the script is below (celery.config):

```
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Create required directories
      sudo mkdir -p /var/log/celery/
      sudo mkdir -p /var/run/celery/
      # Create group called 'celery'
      sudo groupadd -f celery
      # add the user 'celery' if it doesn't exist and add it to the group with same name
      id -u celery &>/dev/null || sudo useradd -g celery celery
      # add permissions to the celery user for r+w to the folders just created
      sudo chown -R celery:celery /var/log/celery/
      sudo chown -R celery:celery /var/run/celery/
      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}
      # Create CELERY configuration script
      celeryconf="[program:celeryd]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A config.celery:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"
      user=celery
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$celeryenv"
      # Create CELERY BEAT configuration script
      celerybeatconf="[program:celerybeat]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"
      directory=/opt/python/current/app
      user=daemon
      numprocs=1
      stdout_logfile=/var/log/celerybeat.log
      stderr_logfile=/var/log/celerybeat.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999
      environment=$celeryenv"
      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "celery.conf" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | tee -a /opt/python/etc/supervisord.conf
        echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi
      # Enable supervisor to listen for HTTP/XML-RPC requests.
      # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
      # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
      if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
      then
        echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
        echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
      fi
      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
      supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat

commands:
  01_killotherbeats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
    ignoreErrors: true
  02_restartbeat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true
```

One thing to focus attention on: in my project the celery.py file is in the config directory, which is why I pass -A config.celery:app when starting celery worker and celery beat.
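
For context, here is a rough sketch of what such a config/celery.py usually looks like in a Django project. The module layout is inferred from the -A config.celery:app argument, and the settings path is an assumption, not the author's actual file.

```python
# config/celery.py -- hypothetical module matching "-A config.celery:app"
import os

from celery import Celery

# Point Celery at the Django settings module (path assumed).
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

# "app" is the attribute that "-A config.celery:app" refers to.
app = Celery("config")

# Read every CELERY_-prefixed setting from Django settings.
app.config_from_object("django.conf:settings", namespace="CELERY")

# Find tasks.py modules in all installed Django apps.
app.autodiscover_tasks()
```

With a layout like this, celery worker -A config.celery:app and celery beat -A config.celery:app both resolve to the same application object, which is what the supervisord programs above rely on.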