This article covers the Scrapy error "Request url must be str or unicode, got Selector" and how to resolve it.

Problem Description

I am writing a spider using Scrapy to scrape user details from Pinterest. I am trying to get the details of a user and their followers (and so on, until the last node).

Here is the spider code:

from scrapy.spider import BaseSpider
import scrapy
from pinners.items import PinterestItem
from scrapy.http import FormRequest
from urlparse import urlparse

class Sample(BaseSpider):
    name = 'sample'
    allowed_domains = ['pinterest.com']
    start_urls = ['https://www.pinterest.com/banka/followers', ]

    def parse(self, response):
        for base_url in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
            list_a = response.urljoin(base_url.extract())
            for new_urls in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
                yield scrapy.Request(new_urls, callback=self.Next)
        yield scrapy.Request(list_a, callback=self.Next)

    def Next(self, response):
        href_base = response.xpath('//div[@class = "tabs"]/ul/li/a')
        href_board = href_base.xpath('//div[@class="BoardCount Module"]')
        href_pin = href_base.xpath('.//div[@class="Module PinCount"]')
        href_like = href_base.xpath('.//div[@class="LikeCount Module"]')
        href_followers = href_base.xpath('.//div[@class="FollowerCount Module"]')
        href_following = href_base.xpath('.//div[@class="FollowingCount Module"]')
        item = PinterestItem()
        item["Board_Count"] = href_board.xpath('.//span[@class="value"]/text()').extract()[0]
        item["Pin_Count"] = href_pin.xpath('.//span[@class="value"]/text()').extract()
        item["Like_Count"] = href_like.xpath('.//span[@class="value"]/text()').extract()
        item["Followers_Count"] = href_followers.xpath('.//span[@class="value"]/text()').extract()
        item["Following_Count"] = href_following.xpath('.//span[@class="value"]/text()').extract()
        item["User_ID"] = response.xpath('//link[@rel="canonical"]/@href').extract()[0]
        yield item

I get the following error:

raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__)
TypeError: Request url must be str or unicode, got Selector:

I did check the type of list_a (the extracted URL); it is unicode.

Recommended Answer

The error is generated by the inner for loop in the parse method:

for new_urls in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
    yield scrapy.Request(new_urls, callback=self.Next)

The new_urls variable is actually a Selector, not a string. Please try something like this:

for base_url in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
    list_a = response.urljoin(base_url.extract())        
    yield scrapy.Request(list_a, callback=self.Next)
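Here `base_url.extract()` turns the Selector into a plain string, and `response.urljoin` resolves that (possibly relative) href against the page URL, which is what the standard library's `urljoin` does under the hood. A stdlib sketch of that resolution step (Python 3's `urllib.parse`; the question's code is Python 2, where the module is `urlparse`, and the href value below is hypothetical):

```python
from urllib.parse import urljoin

page_url = 'https://www.pinterest.com/banka/followers'
relative_href = '/banka/boards/'  # hypothetical href extracted from the page

# A root-relative href replaces the path of the page URL entirely.
absolute = urljoin(page_url, relative_href)
print(absolute)  # https://www.pinterest.com/banka/boards/
```

The resulting absolute string is what `scrapy.Request` expects as its url argument.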
