After reinstalling Miniconda on Windows 7 (64-bit exe installer, Python 2.7) and installing Scrapy through it, here is what got installed:
python 2.7.12
scrapy 1.1.1
twisted 16.4.1
This minimal script, run with "python scrapy_test.py" (it uses the Scrapy API):
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-

import scrapy.spiders.crawl
import scrapy.crawler
import scrapy.utils.project

class MySpider(scrapy.spiders.crawl.CrawlSpider) :

    name = "stackoverflow.com"
    allowed_domains = ["stackoverflow.com"]
    start_urls = ["http://stackoverflow.com/"]
    download_delay = 1.5

    def __init__(self, my_arg = None) :
        print "def __init__"
        self.my_arg = my_arg
        print "self.my_arg"
        print self.my_arg

    def parse(self, response) :
        pass

def main() :
    my_arg = "Value"

    process = scrapy.crawler.CrawlerProcess(scrapy.utils.project.get_project_settings())
    process.crawl(MySpider(my_arg))
    process.start()

if __name__ == "__main__" :
    main()
gives this output:
[scrapy] INFO: Scrapy 1.1.1 started (bot: scrapy_project)
[scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy_project.spiders', 'SPIDER_MODULES': ['scrapy_project.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapy_project'}
def __init__
self.my_arg
Value
[scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
def __init__
self.my_arg
None
[...]
Note how the __init__ method gets run twice, and how the stored argument becomes None after the second run, which is not what I want. Is this supposed to happen?
If I change:
def __init__(self, my_arg = None) :
to:
def __init__(self, my_arg) :
the output is:
[...]
Unhandled error in Deferred:
[twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "scrapy_test.py", line 28, in main
    process.crawl(MySpider(my_arg))
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\twisted\internet\defer.py", line 1331, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\twisted\internet\defer.py", line 1185, in _inlineCallbacks
    result = g.send(result)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 71, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 94, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\spiders\crawl.py", line 96, in from_crawler
    spider = super(CrawlSpider, cls).from_crawler(crawler, *args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\spiders\__init__.py", line 50, in from_crawler
    spider = cls(*args, **kwargs)
exceptions.TypeError: __init__() takes exactly 2 arguments (1 given)
[twisted] CRITICAL:
Traceback (most recent call last):
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\twisted\internet\defer.py", line 1185, in _inlineCallbacks
    result = g.send(result)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 71, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\crawler.py", line 94, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\spiders\crawl.py", line 96, in from_crawler
    spider = super(CrawlSpider, cls).from_crawler(crawler, *args, **kwargs)
  File "C:\Users\XYZ\Miniconda2\lib\site-packages\scrapy\spiders\__init__.py", line 50, in from_crawler
    spider = cls(*args, **kwargs)
TypeError: __init__() takes exactly 2 arguments (1 given)
I have no idea how to fix this. Any ideas?
Best Answer
Here is the method definition of scrapy.crawler.CrawlerProcess.crawl():

crawl(crawler_or_spidercls, *args, **kwargs)

crawler_or_spidercls (Crawler instance, Spider subclass or string) – an already created crawler, or a spider class or spider's name inside the project, used to create it
args (list) – arguments to initialize the spider
kwargs (dict) – keyword arguments to initialize the spider

This means you should pass your Spider class (or its name), separately from the kwargs needed to initialize said Spider, like so:
process.crawl(MySpider, my_arg = 'Value')
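For the script in the question, a minimal sketch of the corrected main() could look like the code below; MySpider itself stays as it is, and Scrapy forwards my_arg to __init__ through from_crawler(). This also explains why __init__ ran twice before: MySpider(my_arg) was called once by your own code, and Scrapy, which expects a spider class rather than an instance, then built a second spider via from_crawler() with no arguments, so my_arg fell back to None.

def main() :
    process = scrapy.crawler.CrawlerProcess(scrapy.utils.project.get_project_settings())
    # Pass the spider class itself plus keyword arguments --
    # do NOT instantiate MySpider here; Scrapy creates the single
    # instance via MySpider.from_crawler(...), passing my_arg along.
    process.crawl(MySpider, my_arg = "Value")
    process.start()

With this change the output should contain a single "def __init__" line, and my_arg should keep its value.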
Regarding "python - Scrapy API - Spider class init argument turned to None", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/39639568/