Problem Description
I've installed the scrapyd daemon on an EC2 server exactly as described in the documentation. Now I've changed some of the configuration variables in /etc/scrapyd/conf.d/000-default.
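For reference, that file is plain INI in scrapyd's standard format; the kind of change I mean looks something like this (the values here are just illustrative, not my actual edits):

[scrapyd]
http_port = 6800
max_proc  = 4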
How do I get scrapyd to recognize those changes? I assume it involves restarting the daemon, but I can't find any good guidance on how to do so.
One complicating factor: I have a bunch of crawls queued up, and I'd rather not lose them. I think scrapy knows how to quit and resume them gracefully, but this feature isn't well-documented. Any guidance?
Recommended Answer
It turns out this is pretty simple.
Kill the process like this:
kill -INT $(cat /var/run/scrapyd.pid)
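If you want to be sure the old process has actually exited before restarting, a small wait loop does the trick (a sketch, assuming the same pid file as above and that you're running as root):

# capture the pid first so we can poll for a clean exit
PID=$(cat /var/run/scrapyd.pid)
kill -INT "$PID"
# kill -0 only checks whether the process still exists
while kill -0 "$PID" 2>/dev/null; do sleep 1; done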
Then restart it like this:
/usr/bin/python /usr/local/bin/twistd -ny /usr/share/scrapyd/scrapyd.tac -u scrapy -g nogroup --pidfile /var/run/scrapyd.pid -l /var/log/scrapyd/scrapyd.log &
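Once it's back up, you can sanity-check it over scrapyd's HTTP API. This sketch assumes scrapyd is listening on its default port 6800 and that myproject is a placeholder for your project name; note that daemonstatus.json only exists on newer scrapyd releases, while listjobs.json works on older ones too:

curl http://localhost:6800/daemonstatus.json
curl "http://localhost:6800/listjobs.json?project=myproject"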
As far as I can tell, both commands need to be run as root.
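As for the queued crawls: scrapyd keeps its pending-job queue on disk (SQLite files under its dbs_dir), so jobs that haven't started yet should survive the restart. A crawl that is already running gets killed, though. If you need a running crawl to resume where it left off, Scrapy supports persisting crawl state via the JOBDIR setting, and scrapyd lets you pass a setting at schedule time, roughly like this (a sketch; myproject, myspider, and the crawls/myspider-1 path are all placeholders):

curl http://localhost:6800/schedule.json \
    -d project=myproject \
    -d spider=myspider \
    -d setting=JOBDIR=crawls/myspider-1

Scheduling the spider again with the same JOBDIR should let it pick up its pending-request queue from disk.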