I am trying to use Scrapy with Splash (scrapy-splash, formerly scrapyjs) to crawl pages that render content with scripts, so that I get the fully loaded page.
I use splash + scrapy with the code below.
The arguments are exactly the same as the ones I use when querying the Splash server at localhost:8050 directly.

    script = """
    function main(splash)
      local url = splash.args.url
      assert(splash:go(url))
      assert(splash:wait(0.5))
      return {
        html = splash:html(),
        png = splash:png(),
        har = splash:har(),
      }
    end
    """

    splash_args = {
        'wait': 0.5,
        'url': response.url,
        'images': 1,
        'expand': 1,
        'timeout': 60.0,
        'lua_source': script
    }

    yield SplashRequest(response.url,
                        self.parse_list_other_page,
                        cookies=response.request.cookies,
                        args=splash_args)


The response HTML does not contain the elements I need, yet when I submit the same request directly to the Splash server at localhost:8050, it works fine.

Do you know where the problem is?

This is my settings.py
    SPLASH_URL = 'http://127.0.0.1:8050'
    SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    }

    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        # 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }

    # Crawl responsibly by identifying yourself (and your website) on the
    # user-agent
    USER_AGENT = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 "
                  "Safari/537.36")


Best answer

The default endpoint is 'render.json'; to use the 'lua_source' argument (i.e. to run a Lua script), you have to use the 'execute' endpoint:

yield SplashRequest(response.url,
                    self.parse_list_other_page,
                    endpoint='execute',
                    cookies=response.request.cookies,
                    args=splash_args)
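For context, here is a minimal sketch (not from the original answer) of what this change amounts to at the HTTP level: scrapy-splash POSTs the request arguments as a JSON body to the chosen Splash endpoint, and only the /execute endpoint interprets the 'lua_source' field. The helper function below is hypothetical, just to illustrate the payload shape; it assumes a Splash instance at 127.0.0.1:8050 but does not contact it.

```python
import json

# A cut-down version of the Lua script from the question.
script = """
function main(splash)
  assert(splash:go(splash.args.url))
  assert(splash:wait(0.5))
  return {html = splash:html()}
end
"""

def build_execute_payload(url, lua_source, wait=0.5, timeout=60.0):
    """Build the JSON body that would be POSTed to
    http://127.0.0.1:8050/execute (hypothetical helper for illustration).

    With the default 'render.json' endpoint, the 'lua_source' field is
    simply ignored, which is why the script never ran in the question.
    """
    return json.dumps({
        'url': url,
        'lua_source': lua_source,
        'wait': wait,
        'timeout': timeout,
    })

payload = build_execute_payload('http://example.com', script)
print(json.loads(payload)['url'])
```

In the callback, the table returned by the Lua script is then available as a JSON mapping (e.g. the 'html' key), rather than as a plain rendered page.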

Regarding "python - ScrapyJs (scrapy + splash) cannot load the script, although the Splash server works fine", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/43918648/
