I'm using Scrapy on an Amazon EC2 instance to scrape a 100 MB XML feed, but I'm stuck because the run dies with a MemoryError. The developer I'm working with suggested breaking the 100 MB file into more manageable chunks, but I'm sure there must be a better way to do this.
Log:
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/iterators.py", line 22, in xmliter
text = body_or_str(obj)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/response.py", line 22, in body_or_str
return obj.body_as_unicode() if unicode else obj.body
File "/usr/local/lib/python2.7/dist-packages/scrapy/http/response/text.py", line 62, in body_as_unicode
self._cached_ubody = html_to_unicode(charset, self.body)[1]
File "/usr/local/lib/python2.7/dist-packages/w3lib/encoding.py", line 173, in html_to_unicode
return enc, to_unicode(html_body_str, enc)
File "/usr/local/lib/python2.7/dist-packages/w3lib/encoding.py", line 118, in to_unicode
return data_str.decode(encoding, 'w3lib_replace')
File "/usr/lib/python2.7/encodings/cp1252.py", line 15, in decode
return codecs.charmap_decode(input,errors,decoding_table)
exceptions.MemoryError:
2013-08-08 17:53:29+0000 [site] INFO: Closing spider (finished)
2013-08-08 17:53:29+0000 [site] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 241,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 103257370,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 8, 8, 17, 53, 29, 166687),
'log_count/DEBUG': 7,
'log_count/ERROR': 1,
'log_count/INFO': 4,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/MemoryError': 1,
'start_time': datetime.datetime(2013, 8, 8, 17, 53, 26, 375069)}
2013-08-08 17:53:29+0000 [site] INFO: Spider closed (finished)
My question is: what can I do so that I can process the 100 MB file without running into memory problems?
Best Answer
Scrapy always tries to decode the entire input into Unicode. On a typical wide-Unicode build of Python 2, that means a 100 MB HTML page balloons to roughly 400 MB once decoded (4 bytes per character). So, how do you work around it?
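The question doesn't say what the spider actually extracts from the feed, but the general fix is to avoid materializing the whole document as one decoded string: stream the feed and walk it with an incremental parser such as lxml's etree.iterparse, freeing each element as you go so memory stays roughly constant. A minimal sketch of that idea, written for Python 2.7 to match the traceback above; the feed URL and the "product" record tag are hypothetical placeholders for whatever the real feed uses:

import urllib2  # Python 2.7, matching the traceback above
from lxml import etree

FEED_URL = "http://example.com/feed.xml"  # hypothetical URL standing in for the real feed

def iter_records(url, tag="product"):  # "product" is a placeholder record tag
    # urlopen returns a file-like object; iterparse reads from it in
    # chunks, so the 100 MB body is never decoded into one big string.
    response = urllib2.urlopen(url)
    for _, elem in etree.iterparse(response, tag=tag):
        yield dict((child.tag, child.text) for child in elem)
        # Release the finished element and any already-processed
        # siblings so memory use stays roughly constant.
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]

if __name__ == "__main__":
    for record in iter_records(FEED_URL):
        print record

Inside a Scrapy callback you can apply the same idea by wrapping the raw bytes in a file-like object, e.g. etree.iterparse(io.BytesIO(response.body), tag=...), which at least avoids the 4x expansion of body_as_unicode(); downloading the feed to disk first and iterparsing the file keeps the peak lower still.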