Question
I want to save my data to a remote machine using peewee. When I run my crawler, I get the following error:
  File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1445, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/app/__main__.egg/annuaire_agence_bio/pipelines.py", line 8, in <module>
exceptions.ImportError: No module named peewee
Any suggestions are welcome.
Answer
You cannot install arbitrary modules on Scrapinghub directly; as far as I know, you have to declare them in a requirements file (I only installed MySQLDB this way myself). To do that:
Create a file named scrapinghub.yml in your project's main folder with the following contents:
projects:
  default: 111149
requirements:
  file: requirements.txt
Here 111149 is my project ID on Scrapinghub.
Create another file named requirements.txt in the same directory, and list the modules you need, pinned to the versions you are using, like so:
MySQL-python==1.2.5
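The same mechanism should cover the peewee import error from the question: pin peewee alongside any other dependencies in requirements.txt. The peewee version below is an assumption for illustration; pin whichever release you actually develop against.

```text
MySQL-python==1.2.5
peewee==2.8.5
```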
PS: I was using the MySQLDB module, so that is what I put there.