This article explains how to deal with an empty CSV output file in Python Scrapy. It should be a useful reference for anyone hitting the same problem.

Problem Description

My main Spider code:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from Belray_oil.items import BelrayOilItem

    class BelraySpider(BaseSpider):
        name = "Belray_oil"
        allowed_domains = ["mxdirtrider.com/"]
        start_urls = ["http://www.mxdirtrider.com/h-products/bel-ray/2011-02/pr-bel-ray-accessories-lubricant-oil-2-stroke-2t-mineral-engine.htm?ref=search"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        name = hxs.select("//div[@id='product-title']/h1/span/text()").extract()
        MSRP = hxs.select("//div[@id='price']/span[1]/text()").extract()
        Sale = hxs.select("//div[@id='price']/span[2]/strong/text()").extract()
        print name, MSRP, Sale

My items file:

    from scrapy.item import Item, Field

    class BelrayOilItem(Item):
        name = Field()
        MSRP = Field()
        Sale = Field()

My terminal log output when I run scrapy crawl Belray_oil -o items.csv -t csv:

    2013-07-05 18:03:25-0400 [scrapy] INFO: Scrapy 0.14.4 started (bot: Belray_oil)
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Enabled item pipelines:
    2013-07-05 18:03:25-0400 [Belray_oil] INFO: Spider opened
    2013-07-05 18:03:25-0400 [Belray_oil] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
    2013-07-05 18:03:25-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
    2013-07-05 18:03:26-0400 [Belray_oil] DEBUG: Crawled (200) <GET http://www.mxdirtrider.com/h-products/bel-ray/2011-02/pr-bel-ray-accessories-lubricant-oil-2-stroke-2t-mineral-engine.htm?ref=search> (referer: None)
    2013-07-05 18:03:26-0400 [Belray_oil] ERROR: Spider error processing <GET http://www.mxdirtrider.com/h-products/bel-ray/2011-02/pr-bel-ray-accessories-lubricant-oil-2-stroke-2t-mineral-engine.htm?ref=search>
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1182, in mainLoop
            self.runUntilCurrent()
          File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 805, in runUntilCurrent
            call.func(*call.args, **call.kw)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 381, in callback
            self._startRunCallbacks(result)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 489, in _startRunCallbacks
            self._runCallbacks()
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 576, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/scrapy/spider.py", line 62, in parse
            raise NotImplementedError
        exceptions.NotImplementedError:

    2013-07-05 18:03:26-0400 [Belray_oil] INFO: Closing spider (finished)
    2013-07-05 18:03:26-0400 [Belray_oil] INFO: Dumping spider stats:
        {'downloader/request_bytes': 310,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 13379,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2013, 7, 5, 22, 3, 26, 204316),
         'scheduler/memory_enqueued': 1,
         'spider_exceptions/NotImplementedError': 1,
         'start_time': datetime.datetime(2013, 7, 5, 22, 3, 25, 970550)}
    2013-07-05 18:03:26-0400 [Belray_oil] INFO: Spider closed (finished)
    2013-07-05 18:03:26-0400 [scrapy] INFO: Dumping global stats:
        {'memusage/max': 116150272, 'memusage/startup': 116150272}

The CSV output is always empty, and I can't figure out what exactly the problem is. A little help, please!

Recommended Answer

Two things are wrong here. First, parse is defined at module level instead of being indented inside the class, so Scrapy falls back to the default BaseSpider.parse, which is exactly the raise NotImplementedError you see in the traceback. Second, parse has to return (or yield) an Item; printing the values never hands them to the feed exporter, so the CSV stays empty. The corrected spider:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from Belray_oil.items import BelrayOilItem


    class BelraySpider(BaseSpider):
        name = "Belray_oil"
        allowed_domains = ["mxdirtrider.com"]
        start_urls = [
            "http://www.mxdirtrider.com/h-products/bel-ray/2011-02/pr-bel-ray-accessories-lubricant-oil-2-stroke-2t-mineral-engine.htm?ref=search"]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            item = BelrayOilItem()
            item['name'] = hxs.select("//div[@id='product-title']/h1/span/text()").extract()
            item['MSRP'] = hxs.select("//div[@id='price']/span[1]/text()").extract()
            item['Sale'] = hxs.select("//div[@id='price']/span[2]/strong/text()").extract()
            return item
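As a side note, parse can also yield items instead of returning a single one, which is the way to go when a page contains several products. A minimal sketch, assuming a hypothetical listing page; the product-node class names and XPaths below are illustrative, not taken from the original site:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from Belray_oil.items import BelrayOilItem

    class BelrayListSpider(BaseSpider):
        name = "Belray_oil_list"
        allowed_domains = ["mxdirtrider.com"]
        start_urls = ["http://www.mxdirtrider.com/..."]  # hypothetical listing URL

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            # One item per product node; Scrapy collects everything yielded here.
            for product in hxs.select("//div[@class='product']"):
                item = BelrayOilItem()
                item['name'] = product.select(".//span[@class='title']/text()").extract()
                item['MSRP'] = product.select(".//span[@class='msrp']/text()").extract()
                item['Sale'] = product.select(".//span[@class='sale']/text()").extract()
                yield item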

Then, in items.csv you'll have:

    name,MSRP,Sale
    Bel-Ray 2T Mineral Engine 2-Stroke  ,MSRP $9.75,$8.13
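For readers on current Scrapy versions: BaseSpider and HtmlXPathSelector have long since been removed. A rough equivalent of the fixed spider against the modern API (scrapy.Spider plus response.xpath), assuming the page structure hasn't changed:

    import scrapy

    class BelraySpider(scrapy.Spider):
        name = "Belray_oil"
        allowed_domains = ["mxdirtrider.com"]
        start_urls = [
            "http://www.mxdirtrider.com/h-products/bel-ray/2011-02/pr-bel-ray-accessories-lubricant-oil-2-stroke-2t-mineral-engine.htm?ref=search",
        ]

        def parse(self, response):
            # response.xpath() replaces HtmlXPathSelector; .get() returns the
            # first match as a string (or None) instead of a list.
            yield {
                "name": response.xpath("//div[@id='product-title']/h1/span/text()").get(),
                "MSRP": response.xpath("//div[@id='price']/span[1]/text()").get(),
                "Sale": response.xpath("//div[@id='price']/span[2]/strong/text()").get(),
            }

Recent releases also infer the feed format from the file extension, so scrapy crawl Belray_oil -o items.csv works without the -t flag.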

Hope that helps.

This concludes the article on empty CSV output files in Python Scrapy. We hope the recommended answer is helpful.
