Question
I am trying to deploy a scrapy project via scrapyd-deploy to a remote scrapyd server. The project itself is functional and works perfectly on my local machine, and on the remote server when I deploy it via git push prod.
With scrapyd-deploy I get this error:
% scrapyd-deploy example -p apo
{ "node_name": "spider1",
"status": "error",
"message": "/usr/local/lib/python3.8/dist-packages/scrapy/utils/project.py:90: ScrapyDeprecationWarning: Use of environment variables prefixed with SCRAPY_ to override settings is deprecated. The following environment variables are currently defined: EGG_VERSION\n warnings.warn(\nTraceback (most recent call last):\n File \"/usr/lib/python3.8/runpy.py\", line 193, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.8/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 40, in <module>\n main()\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 37, in main\n execute()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/cmdline.py\", line 142, in execute\n cmd.crawler_process = CrawlerProcess(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 280, in __init__\n super(CrawlerProcess, self).__init__(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 152, in __init__\n self.spider_loader = self._get_spider_loader(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 146, in _get_spider_loader\n return loader_cls.from_settings(settings.frozencopy())\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 60, in from_settings\n return cls(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 24, in __init__\n self._load_all_spiders()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 46, in _load_all_spiders\n for module in walk_modules(name):\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/utils/misc.py\", line 77, in walk_modules\n submod = import_module(fullpath)\n File \"/usr/lib/python3.8/importlib/__init__.py\",
line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"<frozen importlib._bootstrap>\",
line 1014, in _gcd_import\n File \"<frozen importlib._bootstrap>\",
line 991, in _find_and_load\n File \"<frozen importlib._bootstrap>\",
line 975, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\",
line 655, in _load_unlocked\n File \"<frozen importlib._bootstrap>\",
line 618, in _load_backward_compatible\n File \"<frozen zipimport>\",
line 259, in load_module\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py\",
line 31, in <module>\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py\",
line 36, in GetbidSpider\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/act_functions.py\",
line 10, in create_image_dir\nNotADirectoryError: [Errno 20]
Not a directory: '/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/../images/allaboutwatches'\n"}
My guess is that it has something to do with this method I am calling, since part of the error disappears once I comment out the method call:
# function will create a custom name directory to hold the images of each crawl
def create_image_dir(name):
    project_dir = os.path.dirname(__file__) + '/../'  # <-- absolute dir the script is in
    img_dir = project_dir + "images/" + name
    if not os.path.exists(img_dir):
        os.mkdir(img_dir)
    custom_settings = {
        'IMAGES_STORE': img_dir,
    }
    return custom_settings
The same applies to this method:
def brandnames():
    brands = dict()
    script_dir = os.path.dirname(__file__)  # <-- absolute dir the script is in
    rel_path = "imports/brand_names.csv"
    abs_file_path = os.path.join(script_dir, rel_path)
    with open(abs_file_path, newline='') as csvfile:
        reader = csv.DictReader(csvfile, delimiter=';', quotechar='"')
        for row in reader:
            brands[row['name'].lower()] = row['name']
    return brands
How can I change the methods or the deploy config and keep my functionality as it is?
Answer
os.mkdir might be failing because it cannot create nested directories. You can use os.makedirs(img_dir, exist_ok=True) instead.

os.path.dirname(__file__) will point to /tmp/... under scrapyd: the deployed project runs from a packaged .egg, which is a zip file, so a path that goes through it is not a real directory, which is why os.mkdir raises NotADirectoryError. Not sure if this is what you want. I would use an absolute path without calling os.path.dirname(__file__) if you don't want images to be downloaded under /tmp.
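Following that suggestion, here is a minimal sketch of a rewritten create_image_dir. The default base path /var/lib/scrapyd/images is an assumption for illustration; substitute any writable absolute path on your scrapyd host:

```python
import os

def create_image_dir(name, base='/var/lib/scrapyd/images'):
    """Create (if needed) a per-crawl image directory under an absolute
    base path, instead of deriving it from __file__, which points inside
    the packaged .egg when running under scrapyd."""
    # NOTE: the default base path is hypothetical; point it at a
    # writable directory on your server.
    img_dir = os.path.join(base, name)
    # makedirs creates intermediate directories, and with exist_ok=True
    # it does not raise if the directory already exists (unlike os.mkdir)
    os.makedirs(img_dir, exist_ok=True)
    return {'IMAGES_STORE': img_dir}
```

The same reasoning applies to brandnames(): a data file bundled inside the egg cannot be opened via a filesystem path built from __file__. Either ship imports/brand_names.csv to a known absolute path on the server, or read it out of the package with pkgutil.get_data, which also works for zipped eggs.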