I am trying to use urlparse.urljoin inside a Scrapy spider to compile a list of URLs to scrape. Currently my spider returns nothing, but does not raise any errors either, so I am trying to check whether the URLs are being compiled correctly.

My attempt was to test this in IDLE using str.join, as shown below:

>>> href = ['lphs.asp?id=598&city=london',
 'lphs.asp?id=480&city=london',
 'lphs.asp?id=1808&city=london',
 'lphs.asp?id=1662&city=london',
 'lphs.asp?id=502&city=london',]
>>> for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = str.join(base, x)
    print(final_url)


Which returned, for a single line:

lhttp:/www.url-base.com/destination/phttp:/www.url-base.com/destination/hhttp:/www.url-base.com/destination/shttp:/www.url-base.com/destination/.http:/www.url-base.com/destination/ahttp:/www.url-base.com/destination/shttp:/www.url-base.com/destination/phttp:/www.url-base.com/destination/?http:/www.url-base.com/destination/ihttp:/www.url-base.com/destination/dhttp:/www.url-base.com/destination/=http:/www.url-base.com/destination/5http:/www.url-base.com/destination/9http:/www.url-base.com/destination/8http:/www.url-base.com/destination/&http:/www.url-base.com/destination/chttp:/www.url-base.com/destination/ihttp:/www.url-base.com/destination/thttp:/www.url-base.com/destination/yhttp:/www.url-base.com/destination/=http:/www.url-base.com/destination/lhttp:/www.url-base.com/destination/ohttp:/www.url-base.com/destination/nhttp:/www.url-base.com/destination/dhttp:/www.url-base.com/destination/ohttp:/www.url-base.com/destination/n

I think it is apparent from my example that str.join behaves differently, and if so, that is why my spider is not following these links! However, it would be good to have this confirmed.
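For reference, a quick check of what str.join(base, x) actually does explains the output above (a minimal sketch; str.join(sep, iterable) is just the unbound form of sep.join(iterable), so base is inserted between every character of x):

>>> base = "-"
>>> x = 'lphs.asp?id=598&city=london'
>>> str.join(base, x)   # equivalent to base.join(x)
'l-p-h-s-.-a-s-p-?-i-d-=-5-9-8-&-c-i-t-y-=-l-o-n-d-o-n'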

If this is not the right way to test it, how should I test this process?

Update

Trying urlparse.urljoin with the following (importing urlparse from urllib.parse):

>>> from urllib.parse import urlparse
>>> for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = urlparse.urljoin(base, x)
    print(final_url)


Which throws AttributeError: 'function' object has no attribute 'urljoin'

Update - the spider function in question

def parse_links(self, response):
    room_links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract() # insert xpath which contains the href for the rooms
    for link in room_links:
        base_url = "http://www.example.com/followthrough"
        final_url = urlparse.urljoin(base_url, link)
        print(final_url)
        # This is not joining the final_url right
        yield Request(final_url, callback=parse_links)


Update

I just tested again in IDLE:

>>> from urllib.parse import urljoin
>>> from urllib import parse
>>> room_links = ['lphs.asp?id=562&city=london',
 'lphs.asp?id=1706&city=london',
 'lphs.asp?id=1826&city=london',
 'lphs.asp?id=541&city=london',
 'lphs.asp?id=1672&city=london',
 'lphs.asp?id=509&city=london',
 'lphs.asp?id=428&city=london',
 'lphs.asp?id=614&city=london',
 'lphs.asp?id=336&city=london',
 'lphs.asp?id=412&city=london',
 'lphs.asp?id=611&city=london',]
>>> for link in room_links:
    base_url = "http:/www.url-base.com/destination/"
    final_url = urlparse.urljoin(base_url, link)
    print(final_url)


Which threw this:

Traceback (most recent call last):
  File "<pyshell#34>", line 3, in <module>
    final_url = urlparse.urljoin(base_url, link)
AttributeError: 'function' object has no attribute 'urljoin'

Best Answer

You see the output given by the following:

for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = str.join(base, x)   # <-- 'base' is used as a separator between the characters of 'x'
    print(final_url)


urljoin from the urllib library behaves differently, see the documentation. It is not simple string concatenation.
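As an illustration, a minimal sketch of the difference, assuming Python 3 and a hypothetical, well-formed base URL (note the double slash after http:):

>>> from urllib.parse import urljoin
>>> base = "http://www.url-base.com/destination/"
>>> urljoin(base, 'lphs.asp?id=598&city=london')
'http://www.url-base.com/destination/lphs.asp?id=598&city=london'
>>> # without the trailing slash, the last path segment of the base is replaced
>>> urljoin("http://www.url-base.com/destination", 'lphs.asp?id=598&city=london')
'http://www.url-base.com/lphs.asp?id=598&city=london'
>>> # str.join, by contrast, just inserts base between the characters of its second argument
>>> str.join(base, 'ab')
'ahttp://www.url-base.com/destination/b'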

Edit:
Based on your comment, I assume you are using Python 3. With that import statement you import the urlparse function, which is why you get that error. Either import and use the function directly:

from urllib.parse import urljoin
...
final_url = urljoin(base, x)


Or import the parse module and use the function like this:

from urllib import parse
...
final_url = parse.urljoin(base, x)
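Applied to the spider from the question, a minimal sketch could look like the following (this assumes Python 3, that parse_links is a method on your scrapy.Spider subclass, and keeps the placeholder base URL from the question; inside a class the callback also needs to be referenced as self.parse_links):

from urllib.parse import urljoin

from scrapy import Request


# inside your scrapy.Spider subclass:
def parse_links(self, response):
    # xpath which contains the href for the rooms
    room_links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
    for link in room_links:
        base_url = "http://www.example.com/followthrough"
        final_url = urljoin(base_url, link)
        # Scrapy responses also provide response.urljoin(link),
        # which uses the URL of the current response as the base.
        yield Request(final_url, callback=self.parse_links)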

Regarding python - Scrapy - Does urlparse.urljoin behave the same as str.join?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46804324/
