Without overriding the file_path method, the spider downloads all images under the default filenames (a hash of the request URL). However, when I try to override that method, it stops working: the default output field images ends up empty.

I have already tried both relative and absolute paths for the IMAGES_STORE setting in settings.py, as well as overriding the file_path method, with no luck. Even when I override file_path with exactly the same code as the default implementation, the images are not downloaded.

Any help would be greatly appreciated!

settings.py

BOT_NAME = 'HomeApp2'

SPIDER_MODULES = ['HomeApp2.spiders']
NEWSPIDER_MODULE = 'HomeApp2.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36'

# ScrapySplash settings
SPLASH_URL = 'http://192.168.99.100:8050'
DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        }
SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        }
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'HomeApp2.pipelines.DuplicatesPipeline': 250,
    'HomeApp2.pipelines.ProcessImagesPipeline': 251,
    'HomeApp2.pipelines.HomeApp2Pipeline': 300,
}

IMAGES_STORE = 'files'


pipelines.py

import json
import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline

class DuplicatesPipeline(object):
    def __init__(self):
        self.sku_seen = set()

    def process_item(self, item, spider):
        if item['sku'] in self.sku_seen:
            raise DropItem("Repeated item found: %s" % item)
        else:
            self.sku_seen.add(item['sku'])
            return item

class ProcessImagesPipeline(ImagesPipeline):

    '''
    def file_path(self, request):
        print('!!!!!!!!!!!!!!!!!!!!!!!!!')
        sku = request.meta['sku']
        num = request.meta['num']
        return '%s/%s.jpg' % (sku, num)
    '''

    def get_media_requests(self, item, info):
        print('- - - - - - - - - - - - - - - - - -')
        sku = item['sku']
        for num, image_url in item['image_urls'].items():
            yield scrapy.Request(url=image_url, meta = {'sku': sku,
                                                        'num': num})

class HomeApp2Pipeline(object):
    def __init__(self):
        self.file = open('items.jl', 'w')

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + '\n'
        self.file.write(line)
        return item


AppScrape2.py

import scrapy
from scrapy_splash import SplashRequest
from HomeApp2.items import HomeAppItem

class AppScrape2Spider(scrapy.Spider):
    name = 'AppScrape2'

    def start_requests(self):
        yield SplashRequest(
            url = 'https://www.appliancesonline.com.au/product/samsung-sr400lstc-400l-top-mount-fridge?sli_sku_jump=1',
            callback = self.parse,
        )

    def parse(self, response):

        item = HomeAppItem()

        # Guard against a missing breadcrumb before splitting out the SKU
        product = response.css('aol-breadcrumbs li:nth-last-of-type(1) .breadcrumb-link ::text').extract_first()
        if product is None:
            return
        item['sku'] = product.rsplit(' ', 1)[-1]
        item['image_urls'] = {}

        root_url = 'https://www.appliancesonline.com.au'
        product_picture_count = 0
        for pic in response.css('aol-product-media-gallery-main-image-portal img.image'):
            product_picture_count = product_picture_count + 1
            item['image_urls']['p'+str(product_picture_count)] = (
                root_url + pic.css('::attr(src)').extract_first())

        feature_count = 0
        for feat in response.css('aol-product-features .feature'):
            feature_count = feature_count + 1
            item['image_urls']['f'+str(feature_count)] = (
                root_url + feat.css('.feature-image ::attr(src)').extract_first())

        yield item


items.py

import scrapy

class HomeAppItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    sku = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()

    pass

Best answer

After much trial and error, I found the solution. It was simply a matter of adding the remaining parameters to the file_path method.

Changing

def file_path(self, request):

to
def file_path(self, request, response=None, info=None):


It seems my original code was overriding the method with the wrong signature, which caused the pipeline's internal call to it to fail.
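
For reference, here is a minimal sketch of what the corrected ProcessImagesPipeline could look like, reusing the sku/num meta keys from the question; the <sku>/<num>.jpg naming scheme is only illustrative:

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class ProcessImagesPipeline(ImagesPipeline):

    def get_media_requests(self, item, info):
        # Attach the SKU and the image key to each download request so that
        # file_path() can read them back from request.meta
        sku = item['sku']
        for num, image_url in item['image_urls'].items():
            yield scrapy.Request(url=image_url, meta={'sku': sku, 'num': num})

    # Scrapy 1.7.3 calls file_path(request, response=..., info=...), so the
    # override must accept these extra keyword arguments
    def file_path(self, request, response=None, info=None):
        sku = request.meta['sku']
        num = request.meta['num']
        return '%s/%s.jpg' % (sku, num)

With this in place, each image is saved under IMAGES_STORE in a folder named after the SKU, e.g. files/<sku>/p1.jpg.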

Regarding "python - How to override the file_path function in Scrapy 1.7.3?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59222824/
