I'm using a simple CrawlSpider implementation to crawl websites. By default, Scrapy follows a 302 redirect to the target location and ignores the originally requested link. On one particular site I hit a page that 302-redirects to another page. My goal is to log both the original link (which responds with the 302) and the target location (specified in the HTTP response header), and to handle them in the parse_item method of the CrawlSpider. How can I achieve this?

I've come across solutions that mention using dont_redirect=True or REDIRECT_ENABLED=False, but I don't actually want to ignore the redirects; in fact, I want to take the redirected page into account (i.e. not ignore it) as well.
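
For reference, those two approaches look roughly like this (a minimal sketch; the spider name and URL are placeholders, and handle_httpstatus_list is needed so that HttpErrorMiddleware does not silently drop the 302 response):

import scrapy


class NoRedirectSpider(scrapy.Spider):
    name = "noredirect"
    # Option 1: disable RedirectMiddleware for the whole spider
    custom_settings = {'REDIRECT_ENABLED': False}
    # needed so HttpErrorMiddleware lets the 302 reach the callback
    handle_httpstatus_list = [302]

    def start_requests(self):
        # Option 2: disable redirect handling for a single request
        yield scrapy.Request(
            'http://www.example.com/page1',
            meta={'dont_redirect': True},
            callback=self.parse_page)

    def parse_page(self, response):
        self.logger.debug("status=%d, URL=%s" % (response.status, response.url))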

For example: I visit http://www.example.com/page1 , which sends a 302 redirect HTTP response pointing to http://www.example.com/page2 . By default, Scrapy ignores page1 , follows to page2 and processes it. I want to process both page1 and page2 in parse_item .
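
Incidentally, when redirects are left enabled, the built-in RedirectMiddleware records the chain of intermediate URLs in response.meta['redirect_urls'], so the original page1 URL can still be logged from the final response; what it cannot give you is the 302 response itself, which is why the accepted answer below disables redirection. A minimal sketch (spider name and URL are placeholders):

import scrapy


class RedirectChainSpider(scrapy.Spider):
    name = "redirectchain"
    start_urls = ['http://www.example.com/page1']

    def parse(self, response):
        # RedirectMiddleware stores every intermediate URL
        # (here: page1) when it follows a redirect
        original_urls = response.meta.get('redirect_urls', [])
        self.logger.debug("final URL=%s, redirected from %r"
                          % (response.url, original_urls))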

Edit

I'm already using handle_httpstatus_list = [500, 404] in the spider's class definition to handle 500 and 404 response codes in parse_item , but the same does not work for 302 if I specify it in handle_httpstatus_list .
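
For context, that attribute sits in the class body; a minimal CrawlSpider sketch (the name, URL and rules are placeholders), which under Scrapy 1.0.x still will not deliver 302 responses to parse_item:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MyCrawler(CrawlSpider):
    name = "mycrawler"
    start_urls = ['http://www.example.com/']
    # 500 and 404 responses reach parse_item; adding 302 here has no
    # effect in Scrapy 1.0.x because RedirectMiddleware consumes the
    # response before HttpErrorMiddleware ever checks this list
    handle_httpstatus_list = [500, 404, 302]
    rules = (Rule(LinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        self.logger.debug("status=%d, URL=%s" % (response.status, response.url))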

Best answer

Scrapy 1.0.5 (the latest official release as I write these lines) does not use handle_httpstatus_list in the built-in RedirectMiddleware - see this issue.
As of Scrapy 1.1.0 ( 1.1.0rc1 is available ), the issue is fixed.

Even with redirects disabled, you can still mimic their behavior in your callback by checking the Location header and returning a Request for the redirect target.

Example spider:

$ cat redirecttest.py
import scrapy


class RedirectTest(scrapy.Spider):

    name = "redirecttest"
    start_urls = [
        'http://httpbin.org/get',
        'https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip'
    ]
    # let 302 responses through HttpErrorMiddleware to the callback
    handle_httpstatus_list = [302]

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, dont_filter=True, callback=self.parse_page)

    def parse_page(self, response):
        self.logger.debug("(parse_page) response: status=%d, URL=%s" % (response.status, response.url))
        if response.status in (302,) and 'Location' in response.headers:
            self.logger.debug("(parse_page) Location header: %r" % response.headers['Location'])
            # mimic RedirectMiddleware: resolve the Location header against
            # the current URL and follow it manually (decode() because
            # header values are bytes under Python 3)
            yield scrapy.Request(
                response.urljoin(response.headers['Location'].decode()),
                callback=self.parse_page)

Console log:
$ scrapy runspider redirecttest.py -s REDIRECT_ENABLED=0
[scrapy] INFO: Scrapy 1.0.5 started (bot: scrapybot)
[scrapy] INFO: Optional features available: ssl, http11
[scrapy] INFO: Overridden settings: {'REDIRECT_ENABLED': '0'}
[scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
[scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
[scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
[scrapy] INFO: Enabled item pipelines:
[scrapy] INFO: Spider opened
[scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
[scrapy] DEBUG: Crawled (200) <GET http://httpbin.org/get> (referer: None)
[redirecttest] DEBUG: (parse_page) response: status=200, URL=http://httpbin.org/get
[scrapy] DEBUG: Crawled (302) <GET https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip> (referer: None)
[redirecttest] DEBUG: (parse_page) response: status=302, URL=https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip
[redirecttest] DEBUG: (parse_page) Location header: 'http://httpbin.org/ip'
[scrapy] DEBUG: Crawled (200) <GET http://httpbin.org/ip> (referer: https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip)
[redirecttest] DEBUG: (parse_page) response: status=200, URL=http://httpbin.org/ip
[scrapy] INFO: Closing spider (finished)

Note that you need handle_httpstatus_list to include 302; otherwise, you will see this kind of log (coming from HttpErrorMiddleware ):
[scrapy] DEBUG: Crawled (302) <GET https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip> (referer: None)
[scrapy] DEBUG: Ignoring response <302 https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip>: HTTP status code is not handled or not allowed
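
If setting it spider-wide is too broad, the same list can also be supplied per request through the handle_httpstatus_list meta key (a minimal sketch; the spider name is a placeholder):

import scrapy


class PerRequestSpider(scrapy.Spider):
    name = "perrequest"

    def start_requests(self):
        # allow 302 through HttpErrorMiddleware for this request only,
        # instead of the spider-wide handle_httpstatus_list attribute
        yield scrapy.Request(
            'https://httpbin.org/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fip',
            meta={'handle_httpstatus_list': [302]},
            callback=self.parse_page)

    def parse_page(self, response):
        self.logger.debug("status=%d, URL=%s" % (response.status, response.url))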

Regarding redirect - Scrapy handling of the 302 response code, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35330707/
