Can you upload to S3 using a stream rather than a local file?

Problem Description

I need to create a CSV and upload it to an S3 bucket. Since I'm creating the file on the fly, it would be better if I could write it directly to the S3 bucket as it is being created, rather than writing the whole file locally and then uploading it at the end.

Is there a way to do this? My project is in Python and I'm fairly new to the language. Here is what I tried so far:

import csv
import io
import boto
from boto.s3.key import Key


conn = boto.connect_s3()
bucket = conn.get_bucket('dev-vs')
k = Key(bucket)
k.key = 'foo/foobar'

fieldnames = ['first_name', 'last_name']
writer = csv.DictWriter(io.StringIO(), fieldnames=fieldnames)
# raises the error below: boto's S3 keys reject streaming (chunked) writes
k.set_contents_from_stream(writer.writeheader())

I received this error: BotoClientError: s3 does not support chunked transfer

Update: I found a way to write directly to S3, but I can't find a way to clear the buffer without actually deleting the lines I have already written. So, for example:

conn = boto.connect_s3()
bucket = conn.get_bucket('dev-vs')
k = Key(bucket)
k.key = 'foo/foobar'

testDict = [{
    "fieldA": "8",
    "fieldB": None,
    "fieldC": "888888888888"},
    {
    "fieldA": "9",
    "fieldB": None,
    "fieldC": "99999999999"}]

f = io.StringIO()
fieldnames = ['fieldA', 'fieldB', 'fieldC']
writer = csv.DictWriter(f, fieldnames=fieldnames)
writer.writeheader()
k.set_contents_from_string(f.getvalue())

for row in testDict:
    writer.writerow(row)
    # each call re-uploads the entire buffer, replacing the whole object,
    # so f has to keep every row written so far
    k.set_contents_from_string(f.getvalue())

f.close()

This writes 3 lines to the file; however, I'm unable to release the memory needed to write a big file. If I add:

f.seek(0)
f.truncate(0)

to the loop, then only the last line of the file is written. Is there any way to release resources without deleting lines from the file?

Recommended Answer

I did find a solution to my question, which I will post here in case anyone else is interested. I decided to do this as parts in a multipart upload: you can't stream straight to S3, but a multipart upload lets you send the file in pieces and release each piece from memory once it has been uploaded. There is also a package that turns your streaming writes into a multipart upload for you, which is what I used: Smart Open.

import smart_open
import io
import csv

testDict = [{
    "fieldA": "8",
    "fieldB": None,
    "fieldC": "888888888888"},
    {
    "fieldA": "9",
    "fieldB": None,
    "fieldC": "99999999999"}]

fieldnames = ['fieldA', 'fieldB', 'fieldC']
f = io.StringIO()  # small reusable buffer holding one CSV row at a time
with smart_open.smart_open('s3://dev-test/bar/foo.csv', 'wb') as fout:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    fout.write(f.getvalue())  # send the header to the S3 stream

    for row in testDict:
        f.seek(0)
        f.truncate(0)  # clear the buffer so only the new row stays in memory
        writer.writerow(row)
        fout.write(f.getvalue())  # smart_open flushes these writes to S3 as parts

f.close()
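
One note for anyone running this on Python 3: f.getvalue() returns str, so the 'wb' stream above would need the data encoded to bytes first. Newer releases of smart_open expose smart_open.open, which also accepts text modes, so the intermediate StringIO buffer can be dropped and the csv writer can write straight to the S3 stream. This is only a sketch, assuming a recent smart_open with boto3 credentials already configured; the bucket and key are placeholders:

import csv
import smart_open

testDict = [
    {"fieldA": "8", "fieldB": None, "fieldC": "888888888888"},
    {"fieldA": "9", "fieldB": None, "fieldC": "99999999999"}]

fieldnames = ['fieldA', 'fieldB', 'fieldC']

# 's3://dev-test/bar/foo.csv' is a placeholder URI; smart_open picks up
# credentials from the usual boto3 sources (env vars, ~/.aws, instance role).
with smart_open.open('s3://dev-test/bar/foo.csv', 'w') as fout:
    # fout behaves like a text-mode file object, so the csv writer can use it
    # directly; smart_open buffers the writes and ships them to S3 in parts.
    writer = csv.DictWriter(fout, fieldnames=fieldnames)
    writer.writeheader()
    for row in testDict:
        writer.writerow(row)
# leaving the with-block finishes the multipart upload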

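Under the hood this is the plain S3 multipart upload API: start an upload, send numbered parts, then ask S3 to stitch them together. For anyone who would rather avoid the extra dependency, the same idea can be done by hand with boto3. This is just a sketch with a placeholder bucket and key; note that in a real upload every part except the last must be at least 5 MB, so rows have to be batched into large enough chunks before each upload_part call:

import boto3

s3 = boto3.client('s3')
bucket, key = 'dev-test', 'bar/foo.csv'  # placeholders

# 1. start the multipart upload and remember its id
mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = mpu['UploadId']

# 2. upload a chunk of CSV text as part number 1 (real code would keep
#    appending rows to the chunk until it reaches the 5 MB minimum)
chunk = 'fieldA,fieldB,fieldC\r\n8,,888888888888\r\n'.encode('utf-8')
resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                      PartNumber=1, Body=chunk)
parts = [{'ETag': resp['ETag'], 'PartNumber': 1}]

# 3. complete the upload; S3 assembles the parts into one object
s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                             MultipartUpload={'Parts': parts})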