Problem description
I have developed an application that generates large files (700 MB+) from MySQL data and then serves them to users.
I am now migrating the app to Heroku.
In order to upload a file to Amazon S3, the file either has to be generated on the filesystem first or has to be uploaded as a string, since Heroku can't guarantee the file will still be on disk (the dyno might restart or fail for whatever reason).
Files are going to be pretty big, so multipart upload will be used (I am not sure whether string uploads can be done in parts).
I don't know if my plan is going to work correctly, or if there is a better way of doing this. What if something goes wrong and the dyno fails during the request?
How I think it should work: the app starts fetching data from the database, generates a 5 MB string, sends it to AWS as one part, and loops through the dataset until the complete file has been sent.
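A minimal sketch of that flow in Python with boto3 might look like the following. The bucket name, object key, and the fetch_rows() generator are hypothetical stand-ins for the real MySQL query; the only hard requirement from S3 is that every part except the last must be at least 5 MB.

```python
import boto3

def fetch_rows():
    """Hypothetical stand-in for streaming rows out of MySQL as bytes."""
    for i in range(10):
        yield (f"row-{i}\n" * 100_000).encode()

s3 = boto3.client("s3")
bucket, key = "my-bucket", "exports/report.csv"  # hypothetical names

# Start a multipart upload; every part except the last must be >= 5 MB.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]

parts, buffer, part_number = [], b"", 1
try:
    for chunk in fetch_rows():
        buffer += chunk
        if len(buffer) >= 5 * 1024 * 1024:  # flush roughly every 5 MB
            resp = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=buffer,
            )
            parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
            part_number += 1
            buffer = b""
    if buffer:  # the final part is allowed to be smaller than 5 MB
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=buffer,
        )
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # If the dyno dies mid-stream, abort so S3 doesn't keep the orphaned parts around.
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise
```

If the dyno is killed partway through, the except branch at least attempts to abort the multipart upload, since incomplete parts are otherwise retained (and billed) until explicitly aborted.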
If the experience of others is any indication, the answer is nope. In this post dated May 20, 2013, a developer documented his experience with Heroku versus AWS: "Why I left Heroku, and notes on my new AWS setup" (http://www.holovaty.com/writing/aws-notes/).
I would suggest using an Amazon EC2 reserved instance. To get started, you can buy a second-hand reserved instance reservation ("Third Party"). I recently bought three reservations for $20 down plus a subscription of under $15 per month for the remainder of the tenancy (two years) to run them in three regions, and I couldn't be any happier.
Amazon can write to S3 faster than anyone else, so if that is a concern, EC2 has an advantage right there. Plus, Amazon will not push upgrades on you that break compatibility the way Heroku has been known to. I hope this helps.