I am crawling a site with Scrapy, and the site appears to append a random value to the query string at the end of every URL. This turns the crawl into a kind of infinite loop.
How can I make Scrapy ignore the query string part of the URL?

Best answer

Use urlparse (the urlparse module in Python 2; urllib.parse.urlparse in Python 3).
Example code:

from urlparse import urlparse  # Python 2; in Python 3 use: from urllib.parse import urlparse
o = urlparse('http://url.something.com/bla.html?querystring=stuff')

# Rebuild the URL from scheme, host and path only, dropping the query string.
url_without_query_string = o.scheme + "://" + o.netloc + o.path

Example output:
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from urlparse import urlparse
>>> o = urlparse('http://url.something.com/bla.html?querystring=stuff')
>>> url_without_query_string = o.scheme + "://" + o.netloc + o.path
>>> print url_without_query_string
http://url.something.com/bla.html
>>>
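To apply this inside Scrapy itself, one option is to strip the query string from every extracted link before it is followed, for example via the process_value argument of a CrawlSpider's LinkExtractor. The sketch below is a minimal illustration and is not from the original answer: the spider name, allowed domain, start URL, and callback are placeholders, and the import paths assume a recent Scrapy version running on Python 2 (on Python 3, use from urllib.parse import urlparse).

from urlparse import urlparse  # Python 3: from urllib.parse import urlparse

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


def strip_query(url):
    # Rebuild each link from scheme, host and path only,
    # so the random query-string values cannot create "new" URLs.
    o = urlparse(url)
    return o.scheme + "://" + o.netloc + o.path


class ExampleSpider(CrawlSpider):
    name = "example"                             # placeholder
    allowed_domains = ["url.something.com"]      # placeholder
    start_urls = ["http://url.something.com/"]   # placeholder

    # process_value is called on every value the link extractor finds,
    # so Scrapy's duplicate filter then operates on the cleaned URLs.
    rules = (
        Rule(LinkExtractor(process_value=strip_query),
             callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        yield {"url": response.url}

Scrapy's w3lib dependency also provides a url_query_cleaner helper (w3lib.url.url_query_cleaner) that can remove or whitelist query parameters, which may be a convenient alternative to hand-rolling the cleanup.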

Regarding "python - How to remove the query from a URL?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/8567171/
