When I run this command in cmd:

scrapy crawl quotes -o item.csv -a u=test_user_name -a p=test_passporw_name -a urls=http://books.toscrape.com/

it shows:

raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: h

# -*- coding: utf-8 -*-
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule
from scrapy.utils.response import open_in_browser
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector


class QuotesSpider(InitSpider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    login_page='http://quotes.toscrape.com/login'
    start_urls = ['']
    username=''
    password=''

    def __init__(self,u,p,urls):
        self.username=u
        self.password=p
        self.start_urls=urls




    def init_request(self):
        # This function is called before crawling starts.
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        csrf_token=response.xpath('//*[@name="csrf_token"]//@value').extract_first()
        return FormRequest.from_response(response,
                                         formdata={'csrf_token': csrf_token,
                                                   'username': self.username,
                                                   'password': self.password,
                                                   },
                                         callback=self.check_login_response)

    def check_login_response(self, response):
        # open_in_browser(response)
        # Check the response returned by the login request to see if we are successfully logged in.
        if "Logout" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..

            return self.initialized() # ****THIS LINE FIXED THE LAST PROBLEM*****

        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse(self, response):
        open_in_browser(response)

Best Answer

self.start_urls=urls makes start_urls a string instead of a list.
As a result, every single character of that string gets interpreted as a URL.
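That is also why the error message ends with a lone h. A minimal sketch (plain Python, no Scrapy needed) of what Scrapy effectively does when it iterates over start_urls:

```python
# Scrapy iterates over start_urls to build its initial requests.
# Iterating a string yields single characters, so the first "URL"
# Scrapy sees is just 'h' -- which has no scheme, hence the ValueError.
urls = "http://books.toscrape.com/"
print([u for u in urls][:4])  # ['h', 't', 't', 'p']
print([u for u in [urls]])    # ['http://books.toscrape.com/']
```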

Just set start_urls to a list and your code will work:

self.start_urls = [urls]


Also, you don't need to initialize those attributes to dummy values, and you don't need to extract the csrf_token yourself (FormRequest.from_response() fills it in automatically).
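Putting those points together, the __init__ could be trimmed down to something like the sketch below. This is a plain-Python stand-in, not a real Scrapy spider; the argument names u/p/urls come from the command above, and the **kwargs pass-through is my addition so any extra arguments still reach the base class:

```python
class QuotesSpiderInit:
    """Stand-in showing only the corrected __init__ (not a real spider)."""
    def __init__(self, u=None, p=None, urls=None, **kwargs):
        super().__init__(**kwargs)   # let the base class initialize too
        self.username = u
        self.password = p
        self.start_urls = [urls]     # wrap the single URL string in a list

spider = QuotesSpiderInit(u="test_user_name", p="pw",
                          urls="http://books.toscrape.com/")
print(spider.start_urls)  # ['http://books.toscrape.com/']
```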


On a side note, your code looks like it was written for an older version of scrapy: most of those imports have been moved, renamed, or deprecated.
You may want to refresh the code with a quick read through the current docs.

About python - ValueError in Scrapy __init__ arg, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53569335/
