I'm having trouble getting my spider to follow the pagination links for the ads without following every link it finds, which ends up returning every Craigslist page. I've been tinkering with the rules, since I know that's where the problem lies, but I either get only the first page, get every page on Craigslist, or get nothing at all. Any help?
Here is my current code:
from scrapy.selector import HtmlXPathSelector
from craigslist_sample.items import CraigslistSampleItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request


class PageSpider(CrawlSpider):
    name = "cto"
    allowed_domains = ["medford.craigslist.org"]
    start_urls = ["http://medford.craigslist.org/cto/"]

    rules = (
        Rule(
            SgmlLinkExtractor(allow_domains=("medford.craigslist.org", )),
            callback='parse_page', follow=True
        ),
    )

    def parse_page(self, response):
        # Pull each listing row off the listings page
        hxs = HtmlXPathSelector(response)
        rows = hxs.select('//div[@class="content"]/p[@class="row"]')
        for row in rows:
            item = CraigslistSampleItem()
            link = row.xpath('.//span[@class="pl"]/a')
            item['title'] = link.xpath("text()").extract()
            item['link'] = link.xpath("@href").extract()
            item['price'] = row.xpath('.//span[@class="l2"]/span[@class="price"]/text()').extract()

            # Follow the listing's own page to collect its description
            url = 'http://medford.craigslist.org{}'.format(''.join(item['link']))
            yield Request(url=url, meta={'item': item}, callback=self.parse_item_page)

    def parse_item_page(self, response):
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        item['description'] = hxs.select('//section[@id="postingbody"]/text()').extract()
        return item
Best Answer
You should specify the allow argument for SgmlLinkExtractor:

allow (a regular expression (or list)) – a single regular expression (or list of regular expressions) that the (absolute) urls must match in order to be extracted. If not given (or empty), it will match all links.
rules = (
    Rule(SgmlLinkExtractor(allow='http://medford.craigslist.org/cto/'),
         callback='parse_page', follow=True),
)
This will keep all of the extracted links under the http://medford.craigslist.org/cto/ url. Hope that helps.
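
If that still follows more pages than you want, the allow pattern can be narrowed to just the pagination links. Below is a minimal sketch, assuming the next-page URLs followed an index\d+.html scheme (that pattern is an assumption about Craigslist's pagination at the time, not part of the original answer):

rules = (
    # Follow only the numbered listing pages, e.g. /cto/index100.html.
    # The index\d+\.html pattern is an assumed pagination scheme.
    Rule(SgmlLinkExtractor(allow=r'http://medford\.craigslist\.org/cto/index\d+\.html'),
         callback='parse_page', follow=True),
)

The listing detail pages don't need a rule of their own, because parse_page already yields an explicit Request for each row's link. One more thing to keep in mind with CrawlSpider: the response for the start URL is handled by parse_start_url, not by the rule callback, so rows on the very first page won't reach parse_page unless parse_start_url is overridden.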