Question
I'm copying a file from S3 to Cloudfiles, and I would like to avoid writing the file to disk. The Python-Cloudfiles library has an object.stream() call that looks to be what I need, but I can't find an equivalent call in boto. I'm hoping that I would be able to do something like:
shutil.copyfileobj(s3Object.stream(),rsObject.stream())
Is this possible with boto (or I suppose any other s3 library)?
Answer
The Key object in boto, which represents an object in S3, can be used as an iterator, so you should be able to do something like this:
>>> import boto
>>> c = boto.connect_s3()
>>> bucket = c.lookup('garnaat_pub')
>>> key = bucket.lookup('Scan1.jpg')
>>> for bytes in key:
...     out.write(bytes)  # 'out' is any writable binary stream
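Iterating a Key yields plain chunks of bytes, so the loop works with any writable binary stream. A minimal stand-in sketch (using a list of byte chunks and an in-memory `io.BytesIO` in place of a real boto Key and output stream, since those require S3 credentials):

```python
import io

# Stand-in for a boto Key: iterating a Key yields chunks of bytes,
# so any iterable of bytes chunks behaves the same in the loop below.
fake_key = [b"chunk1-", b"chunk2-", b"chunk3"]

out = io.BytesIO()  # any writable binary stream works here
for chunk in fake_key:
    out.write(chunk)

print(out.getvalue())  # the reassembled object body
```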
Or, as in the case of your example, you could do:
>>> shutil.copyfileobj(key, rsObject.stream())
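`shutil.copyfileobj` only needs a file-like `read(size)` method on the source, which boto's Key provides, so the Key can be passed in directly and the copy proceeds in chunks without ever touching disk. A sketch with in-memory stand-ins (`io.BytesIO` in place of both the Key and the Cloudfiles stream, since the real objects need network credentials):

```python
import io
import shutil

# Stand-ins: a real boto Key exposes read(), just as BytesIO does.
key = io.BytesIO(b"image bytes fetched from S3")
rs_stream = io.BytesIO()  # stand-in for rsObject.stream()

# Copies in fixed-size chunks; nothing is written to disk.
shutil.copyfileobj(key, rs_stream)

print(rs_stream.getvalue())
```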