I ran into a problem with Python generators while using the OpenStack Swift client library.
The issue is this: I am retrieving a large string of data from a given URL (about 7 MB), splitting that string into smaller pieces, and returning a generator that yields one chunk of the string on each iteration. In the test suite this is simply a string that gets sent to a monkey-patched class of the Swift client for processing.
The code for the monkey-patched class looks like this:
def monkeypatch_class(name, bases, namespace):
    '''Guido's monkeypatch metaclass.'''
    assert len(bases) == 1, "Exactly one base class required"
    base = bases[0]
    for name, value in namespace.iteritems():
        if name != "__metaclass__":
            setattr(base, name, value)
    return base
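For context, the effect of this metaclass (in Python 2) is that every attribute defined in the body of the "subclass" is copied onto its single base class, and the base class itself is returned instead of a new class. A small sketch of my own to illustrate (the Base/Patched names are just for illustration, not from the project):

class Base(object):
    def greet(self):
        return "original"

class Patched(Base):
    __metaclass__ = monkeypatch_class

    def greet(self):
        return "patched"

# 'Patched' is actually Base itself, and Base.greet has been replaced.
assert Patched is Base
assert Base().greet() == "patched"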
In the test suite:
from swiftclient import client
import StringIO
import utils


class Connection(client.Connection):
    __metaclass__ = monkeypatch_class

    def get_object(self, path, obj, resp_chunk_size=None, ...):
        contents = None
        headers = {}
        # retrieve content from path and store it in 'contents'
        ...
        if resp_chunk_size is not None:
            # stream the string into chunks
            def _object_body():
                stream = StringIO.StringIO(contents)
                buf = stream.read(resp_chunk_size)
                while buf:
                    yield buf
                    buf = stream.read(resp_chunk_size)
            contents = _object_body()
        return headers, contents
After the generator object is returned, it is consumed via the stream function in the storage class:
class SwiftStorage(Storage):

    def get_content(self, path, chunk_size=None):
        path = self._init_path(path)
        try:
            _, obj = self._connection.get_object(
                self._container,
                path,
                resp_chunk_size=chunk_size)
            return obj
        except Exception:
            raise IOError("Could not get content: {}".format(path))

    def stream_read(self, path):
        try:
            return self.get_content(path, chunk_size=self.buffer_size)
        except Exception:
            raise OSError(
                "Could not read content from stream: {}".format(path))
And finally, in my test suite:
def test_stream(self):
    filename = self.gen_random_string()
    # test 7MB
    content = self.gen_random_string(7 * 1024 * 1024)
    io = StringIO.StringIO(content)
    self._storage.stream_write(filename, io)
    io.close()
    # test read / write
    data = ''
    for buf in self._storage.stream_read(filename):
        data += buf
    self.assertEqual(content,
                     data,
                     "stream read failed. output: {}".format(data))
The output looks like this:
======================================================================
FAIL: test_stream (test_swift_storage.TestSwiftStorage)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bacongobbler/git/github.com/bacongobbler/docker-registry/test/test_local_storage.py", line 46, in test_stream
"stream read failed. output: {}".format(data))
AssertionError: stream read failed. output: <generator object _object_body at 0x2a6bd20>
I tried to isolate this with a simple Python script that follows the same flow as the code above, and it passes without any problem:
def gen_num():
    def _object_body():
        for i in range(10000000):
            yield i
    return _object_body()


def get_num():
    return gen_num()


def stream_read():
    return get_num()


def main():
    num = 0
    for i in stream_read():
        num += i
    print num


if __name__ == '__main__':
    main()
Any help with this would be greatly appreciated :)
Best Answer
In your get_object method, you assign the return value of _object_body() to the contents variable. However, that is also the variable that holds the actual data, and it is used for that purpose earlier, inside _object_body.
The problem is that _object_body is a generator function (it uses yield). So when you call it, it produces a generator object, but the function's body does not start running until you iterate over that generator. That means that by the time the function's code actually runs (in the for loop in test_stream), it is long after you reassigned contents = _object_body().
As a result, your stream = StringIO.StringIO(contents) creates a StringIO object wrapping the generator object (hence the error message) rather than the data.
Here is a minimal reproduction that illustrates the problem:
def foo():
    contents = "Hello!"

    def bar():
        print contents
        yield 1

    # Only create the generator. This line runs none of the code in bar.
    contents = bar()

    print "About to start running..."

    for i in contents:
        # Now we run the code in bar, but contents is now bound to
        # the generator object. So this doesn't print "Hello!"
        pass
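One way to avoid this in get_object (a sketch of my own, not part of the answer above; it assumes the rest of the method stays unchanged) is to bind the data to the generator explicitly, for example as a default argument, so the generator no longer looks up contents at iteration time:

if resp_chunk_size is not None:
    # Bind the string as a default argument: default values are evaluated
    # when the function is defined, so 'data' keeps a reference to the
    # original string no matter what 'contents' is rebound to later.
    def _object_body(data=contents):
        stream = StringIO.StringIO(data)
        buf = stream.read(resp_chunk_size)
        while buf:
            yield buf
            buf = stream.read(resp_chunk_size)
    contents = _object_body()

Assigning the generator to a differently named variable instead of rebinding contents would work just as well.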
Regarding "python - Issue with Python generators and the OpenStack Swift client", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20429971/