Problem Description
- The content producer will be a pro user, so a little extra work on their part is not a huge burden. However, keeping it as simple as possible for them (and me) is ideal. It would be best if a web form could be used to initiate the upload.
- There wouldn't be many hundreds of content producers, so some extra time or effort could be devoted to setting up some sort of account or process for each individual content producer. Although automation is king.
- Some said you could use some sort of Java Applet or maybe Silverlight.
- One thing I thought of was using SFTP to upload to EC2 first, then move the file to S3 afterwards. But that sounds like a pain to make secure.
- After some research I discovered S3 allows cross-origin resource sharing. So this could allow uploading directly to S3. However, how stable would this be with huge files?
- How to directly upload files to Amazon S3 from your client side web app
- Direct Upload to S3 (with a little help from jQuery)
Any ideas?
Recommended Answer
You could implement the front-end in pretty much anything that you can code to speak native S3 multipart upload... which is the approach I'd recommend for this, because of stability.
With a multipart upload, "you" (meaning the developer, not the end user, I would suggest) choose a part size, minimum 5MB per part, and the file can span no more than 10,000 "parts", each exactly the same size (the one "you" selected at the beginning of the upload), except for the last part, which would be however many bytes are left over at the end... so the ultimate maximum size of the uploaded file depends on the part size you choose.
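As a quick sanity check of that arithmetic (a minimal sketch; the numbers follow directly from the 5 MiB minimum part size and the 10,000-part limit mentioned above):

```python
MIB = 1024 * 1024

def max_object_size(part_size_bytes, max_parts=10_000):
    """Largest object a given part size can cover with S3 multipart upload."""
    return part_size_bytes * max_parts

# 5 MiB parts (the minimum) cap an upload at roughly 48.8 GiB;
# 64 MiB parts raise that to about 625 GiB.
print(max_object_size(5 * MIB) / (1024 * MIB))    # ~48.8 (GiB)
print(max_object_size(64 * MIB) / (1024 * MIB))   # 625.0 (GiB)
```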
The size of a "part" essentially becomes your restartable/retryable block size (win!)... so your front-end implementation can infinitely resend a failed part until it goes through correctly. Parts don't even have to be uploaded in order, they can be uploaded in parallel, and if you upload the same part more than once, the newer one replaces the older one, and with each block, S3 returns a checksum that you compare to your locally calculated one. The object doesn't become visible in S3 until you finalize the upload. When you finalize the upload, if S3 hasn't got all the parts (which is should, because they were all acknowledged when they uploaded) then the finalize call will fail.
The one thing you do have to keep in mind, though, is that multipart uploads apparently never time out, and if they are never either finalized/completed or actively aborted by the client utility, you will pay for the storage of the uploaded blocks of the incomplete uploads. So, you want to implement an automated back-end process that periodically calls ListMultipartUploads to identify and abort those uploads that for whatever reason were never finished or canceled.
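A sketch of such a cleanup job (again boto3; the bucket name and the 7-day grace period are assumptions, and pagination of the listing is omitted for brevity):

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
BUCKET = "example-video-bucket"                            # placeholder
CUTOFF = datetime.now(timezone.utc) - timedelta(days=7)    # assumed grace period

resp = s3.list_multipart_uploads(Bucket=BUCKET)
for upload in resp.get("Uploads", []):
    if upload["Initiated"] < CUTOFF:
        # Abandoned upload: abort it so its stored parts stop accruing charges.
        s3.abort_multipart_upload(Bucket=BUCKET,
                                  Key=upload["Key"],
                                  UploadId=upload["UploadId"])
```

(These days an S3 lifecycle rule can also abort incomplete multipart uploads automatically, which may remove the need for a custom job.)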
I don't know how helpful this is as an answer to your overall question, but developing a custom front-end tool should not be a complicated matter -- the S3 API is very straightforward. I can say this, because I developed a utility to do this (for my internal use -- this isn't a product plug). I may one day release it as open source, but it likely wouldn't suit your needs anyway -- it's essentially a command-line utility that can be used by automated/scheduled processes to stream ("pipe") the output of a program directly into S3 as a series of multipart parts (the files are large, so my default part size is 64MB), and when the input stream is closed by the program generating the output, it detects this and finalizes the upload. :) I use it to stream live database backups, passed through a compression program, directly into S3 as they are generated, without ever needing those massive files to exist anywhere on any hard drive.
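The general idea of that streaming approach -- not the author's actual tool, just a hypothetical sketch of reading a pipe in part-sized chunks and finalizing when the producer closes its end (bucket and key are placeholders):

```python
import sys
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-backup-bucket", "backups/db-dump.sql.gz"  # placeholders
PART_SIZE = 64 * 1024 * 1024  # 64 MiB, as in the answer

# Example invocation:  pg_dump mydb | gzip | python stream_to_s3.py
upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]
parts, part_number = [], 1

while True:
    chunk = sys.stdin.buffer.read(PART_SIZE)
    if not chunk:        # the producer closed the pipe: nothing left to send
        break
    resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                          PartNumber=part_number, Body=chunk)
    parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
    part_number += 1

s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                             MultipartUpload={"Parts": parts})
```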
Your desire to have a smooth experience for your clients, in my opinion, highly commends S3 multipart for the role, and if you know how to code in anything that can generate a desktop or browser-based UI, can read local desktop filesystems, and has libraries for HTTP and SHA/HMAC, then you can write a client to do this that looks and feels exactly the way you need it to.
You wouldn't need to set up anything manually in AWS for each client, so long as you have a back-end system that authenticates the client utility to you, perhaps by a username and password sent over an SSL connection to an application on a web server, and then provides the client utility with automatically-generated temporary AWS credentials that the client utility can use to do the uploading.
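One way (an assumption, not something the answer prescribes) to mint those temporary credentials on the back-end is AWS STS; a minimal sketch, where the bucket name, key prefix, and duration are all placeholders and the function would only be called after the producer's username/password check out:

```python
import json
import boto3

sts = boto3.client("sts")
BUCKET = "example-video-bucket"   # placeholder

def temporary_upload_credentials(producer_id, hours=12):
    """Issue short-lived credentials that may only write under this producer's prefix."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject",
                       "s3:AbortMultipartUpload",
                       "s3:ListMultipartUploadParts"],
            "Resource": f"arn:aws:s3:::{BUCKET}/{producer_id}/*",
        }],
    }
    resp = sts.get_federation_token(
        Name=producer_id[:32],          # federated user name, max 32 characters
        Policy=json.dumps(policy),
        DurationSeconds=hours * 3600,
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```

The client utility builds its S3 client from those three values, and they expire on their own, so nothing per-client ever has to be provisioned by hand in AWS.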