I have a very simple question about Scrapy. I want to scrape a website with start_url set to www.example.com/1. Then I want to move on to www.example.com/2, www.example.com/3, and so on. I know this should be simple, but how do I do it?

Here is my spider; it could not be simpler:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "scraper"
    start_urls = [
        'http://www.example.com/1',
    ]

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }


Now, how do I go to http://www.example.com/2?

Best Answer

Add a start_requests method to your class and generate the requests you need there:

import scrapy

class QuotesSpider(scrapy.Spider):

    name = "scraper"

    def start_requests(self):
        n = ???                          # set the limit here
        for i in range(1, n):
            yield scrapy.Request('http://www.example.com/{}'.format(i), self.parse)

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }

Another option is to put multiple URLs in the start_urls attribute:

class QuotesSpider(scrapy.Spider):
    name = "scraper"
    start_urls = ['http://www.example.com/{}'.format(i) for i in range(1, 100)]
                                                 # choose your limit here ^^^

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
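One detail worth noting: Python's range(1, 100) stops at 99, so the comprehension above generates pages 1 through 99, not 100. You can check the generated URL list in plain Python, without Scrapy:

```python
# Same list comprehension as in the spider above.
# range(1, 100) yields 1..99, so the last generated page is /99.
urls = ['http://www.example.com/{}'.format(i) for i in range(1, 100)]

print(urls[0])    # first page
print(urls[-1])   # last page
print(len(urls))  # number of pages
```

Adjust the upper bound of range accordingly if you need the endpoint included (e.g. range(1, 101) for pages 1 through 100).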

Regarding python - Scrapy: crawling successive URLs, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45001388/
