Question
I used to be a happy s3cmd user. However recently when I try to transfer a large zip file (~7Gig) to Amazon S3, I am getting this error:
$> s3cmd put thefile.tgz s3://thebucket/thefile.tgz
....
20480 of 7563176329 0% in 1s 14.97 kB/s failed
WARNING: Upload failed: /thefile.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
thefile.tgz -> s3://thebucket/thefile.tgz [1 of 1]
8192 of 7563176329 0% in 1s 5.57 kB/s failed
ERROR: Upload of 'thefile.tgz' failed too many times. Skipping that file.
I am using the latest s3cmd on Ubuntu.
Why is this happening, and how can I solve it? If it can't be resolved, what alternative tool can I use?
Accepted answer
In my case, the failure was caused by the server's clock being ahead of S3's time: my server (located in US East) was set to GMT+4 while I was using Amazon's US East storage facility.
After setting my server's clock to US East time, the problem was gone.
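What makes clock skew cause this: s3cmd signs each request with a timestamp, and S3 rejects requests whose timestamp is too far off its own clock (typically surfacing as a RequestTimeTooSkewed error, which can also show up as broken-pipe upload failures). A minimal Python sketch of the skew check; the header value and the "misset" clock below are illustrative examples, not live S3 output:

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def clock_skew_seconds(http_date, local=None):
    """Seconds the local clock is ahead of the time in an HTTP Date header
    (the format S3 returns, e.g. from `curl -sI https://s3.amazonaws.com`)."""
    remote = parsedate_to_datetime(http_date)
    if local is None:
        # Default to the actual local clock, in UTC.
        local = datetime.now(timezone.utc)
    return (local - remote).total_seconds()

# A clock running on GMT+4 but reporting itself as GMT reads 4 hours ahead:
misset = datetime(2022, 1, 1, 16, 0, tzinfo=timezone.utc)
print(clock_skew_seconds("Sat, 01 Jan 2022 12:00:00 GMT", misset))  # 14400.0
```

An alternative to changing the timezone is resyncing the clock with NTP, e.g. `sudo ntpdate pool.ntp.org` for a one-shot sync, or `sudo timedatectl set-ntp true` on systemd-based Ubuntu releases, which fixes the skew without touching the timezone setting.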