Problem description
I'm trying to download a large file from a server with Python 2:
import urllib2

req = urllib2.Request("https://myserver/mylargefile.gz")
rsp = urllib2.urlopen(req)
data = rsp.read()  # reads the whole response body at once
The server sends data with "Transfer-Encoding: chunked" and I'm only getting some binary data, which cannot be unpacked by gunzip.
Do I have to iterate over multiple read()s? Or issue multiple requests? If so, what would they have to look like?
Note: I'm trying to solve the problem with only the Python 2 standard library, without additional libraries such as urllib3 or requests. Is this even possible?
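For reference, once the complete payload has been retrieved, the gzip data can be unpacked in memory with the standard library alone. This is only a minimal sketch, assuming data holds the full compressed response body from the snippet above:

import gzip
import StringIO

# Wrap the downloaded bytes in a file-like object and decompress them.
buf = StringIO.StringIO(data)
gz = gzip.GzipFile(fileobj=buf)
uncompressed = gz.read()
gz.close()

If GzipFile raises an IOError here, the downloaded bytes are not a complete gzip stream, which points back to the read() question above.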
Recommended answer
If I'm not mistaken, the following worked for me a while back:
# Keep calling read() until it returns an empty string (end of the response)
data = ''
chunk = rsp.read()
while chunk:
    data += chunk
    chunk = rsp.read()
Each read() returns one chunk, so keep reading until nothing more comes back. I don't have documentation ready to support this... yet.
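For a file too large to buffer in memory, a common variation (not part of the original answer) is to pass a block size to read() and stream each block straight to disk. The sketch below reuses the URL from the question; the 16 KiB block size is an arbitrary choice:

import urllib2

BLOCK_SIZE = 16 * 1024  # arbitrary block size, chosen for illustration

req = urllib2.Request("https://myserver/mylargefile.gz")
rsp = urllib2.urlopen(req)

# Write each block to disk as it arrives instead of accumulating it in memory.
out = open("mylargefile.gz", "wb")
try:
    while True:
        block = rsp.read(BLOCK_SIZE)
        if not block:
            break
        out.write(block)
finally:
    out.close()
    rsp.close()

The file on disk can then be unpacked with gunzip or Python's gzip module as usual.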