This article explains how Scrapy handles start_urls; it should be a useful reference for anyone running into the same problem.

Problem description

The script (below), taken from the Scrapy tutorial, contains two start_urls.

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below are a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html
        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re(r'-\s[^\n]*\r')
            items.append(item)

        return items

But why does it scrape only these two web pages? I see allowed_domains = ["dmoz.org"], but these two pages also contain links to other pages within the dmoz.org domain. Why doesn't it scrape them too?

Recommended answer

The start_urls class attribute contains the start URLs and nothing more. If you have extracted the URLs of other pages you want to scrape, yield the corresponding requests from the parse callback, each with [another] callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):

    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/',
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
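
As a rough illustration, here is a minimal sketch of such an override (the page URLs and the parse_page callback are invented for this example; the imports follow the same old BaseSpider-era Scrapy API used in the answer above):

from scrapy.http import Request
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = 'my_spider'
    allowed_domains = ['domain.com']

    def start_requests(self):
        # Build the initial requests ourselves instead of relying on start_urls.
        # The page URLs below are placeholders for illustration only.
        for page in range(1, 4):
            url = 'http://www.domain.com/page/%d' % page
            yield Request(url, callback=self.parse_page)

    def parse_page(self, response):
        # Handle each downloaded page here.
        pass

start_requests() is called once when the spider opens, so whatever it yields (requests built from a database, a file, or generated URLs) replaces the default behaviour of requesting each entry in start_urls.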

This concludes this article on Scrapy start_urls. We hope the recommended answer is helpful, and we appreciate your continued support!
