I am scraping data from this site using Scrapy. I need to call getlink from parse. A normal call does not work when using yield; I get this error:

2015-11-16 10:12:34 [scrapy] ERROR: Spider must return Request, BaseItem, dict or None, got 'generator' in <GET https://www.coldwellbankerhomes.com/fl/miami-dade-county/kvc-17_1,17_3,17_2,17_8/incl-22/>

Returning getlink from the parse function works, but I need to execute some code even after returning. I am confused; any help would be really appreciated.

# -*- coding: utf-8 -*-
from scrapy.spiders import BaseSpider
from scrapy.selector import Selector
from scrapy.http import Request, Response
import re
import csv
import time
from selenium import webdriver

class ColdWellSpider(BaseSpider):
    name = "cwspider"
    allowed_domains = ["coldwellbankerhomes.com"]
    #start_urls = [''.join(row).strip() for row in csv.reader(open("remaining_links.csv"))]
    #start_urls = ['https://www.coldwellbankerhomes.com/fl/boynton-beach/5451-verona-drive-unit-d/pid_9266204/']
    start_urls = ['https://www.coldwellbankerhomes.com/fl/miami-dade-county/kvc-17_1,17_3,17_2,17_8/incl-22/']

    def parse(self, response):
        #browser = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true', '--load-images=false'])
        browser = webdriver.Firefox()
        browser.maximize_window()
        browser.get(response.url)
        time.sleep(5)

        #to extract all the links from a page and send request to those links
        #this works but even after returning i need to execute the while loop
        return self.getlink(response)

        #for clicking the load more button in the page
        while True:
            try:
                browser.find_element_by_class_name('search-results-load-more').find_element_by_tag_name('a').click()
                time.sleep(3)
                self.getlink(response)
            except:
                break

    def getlink(self, response):
        print 'hhelo'
        c = open('data_getlink.csv', 'a')
        d = csv.writer(c, lineterminator='\n')
        print 'hello2'
        listclass = response.xpath('//div[@class="list-items"]/div[contains(@id,"snapshot")]')
        for l in listclass:
            link = 'http://www.coldwellbankerhomes.com/' + ''.join(l.xpath('./h2/a/@href').extract())
            d.writerow([link])
            yield Request(url=str(link), callback=self.parse_link)

    #callback function of Request
    def parse_link(self, response):
        b = open('data_parselink.csv', 'a')
        a = csv.writer(b, lineterminator='\n')
        a.writerow([response.url])
Best Answer
Spider must return Request, BaseItem, dict or None, got 'generator'

getlink() is a generator. You are trying to yield it from parse().

Instead, you can/should iterate over the results of calling getlink():
def parse(self, response):
    browser = webdriver.Firefox()
    browser.maximize_window()
    browser.get(response.url)
    time.sleep(5)

    while True:
        try:
            for request in self.getlink(response):
                yield request

            browser.find_element_by_class_name('search-results-load-more').find_element_by_tag_name('a').click()
            time.sleep(3)
        except:
            break
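As a side note: if this spider ever moves to Python 3.3+ (the print statements in the question are Python 2), the two-line for loop that delegates to the generator can be collapsed into a single statement:

            yield from self.getlink(response)  # equivalent to "for request in ...: yield request"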
Aside from that, I've noticed that you have both self.getlink(response) and self.getlink(browser). The latter would not work, since there is no xpath() method on a webdriver instance - you probably meant to make a Scrapy Selector out of the page source loaded by the webdriver-controlled browser, for example:

selector = scrapy.Selector(text=browser.page_source)
self.getlink(selector)
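To make that concrete, here is a minimal sketch of how the pieces could fit together (re-parsing after every click and the NoSuchElementException handling are my assumptions, not part of the original code). It rebuilds a Selector from the live page source each time more results load, so getlink() only ever needs an object with an xpath() method:

import scrapy
from selenium.common.exceptions import NoSuchElementException

def parse(self, response):
    browser = webdriver.Firefox()
    browser.get(response.url)

    while True:
        # parse whatever the browser is currently rendering
        selector = scrapy.Selector(text=browser.page_source)
        for request in self.getlink(selector):
            yield request

        # try to load the next batch of results; stop when the button is gone
        try:
            browser.find_element_by_class_name('search-results-load-more').find_element_by_tag_name('a').click()
            time.sleep(3)
        except NoSuchElementException:
            break

Note that re-parsing the whole page will yield links you already saw; Scrapy's built-in duplicate filter drops repeated requests for the same URL by default, so this is mostly a wasted-work concern rather than a correctness one.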
You should also look into Explicit Waits with Expected Conditions instead of adding unreliable and slow artificial delays via time.sleep(). Besides, I'm not sure what your reason is for writing CSV manually instead of using the built-in Scrapy Items and Item Exporters. Moreover, you are not closing the files correctly and are not using the with() context manager. Also, try to catch more specific exceptions and avoid having a bare try/except block; a sketch combining these points follows.
Regarding python - python returning multiple times, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33728743/