Streaming a large file from Heroku fails after a 30-second timeout

This post covers a question about streaming a large file from Heroku failing after the 30-second timeout, along with the accepted solution, which may be a useful reference for anyone hitting the same problem.

Problem Description

I have a Python web worker that streams a large file upon client request. After 30 seconds the connection is terminated by Heroku. I'm using web.py and yielding new output. According to the Heroku docs:

I send much more than 1 byte every 55 seconds, but the connection is still terminated.

These are the headers I'm using:

web.header('Content-type', 'application/zip')
web.header('Content-Disposition', 'attachment; filename="images.zip"')

I even tried adding:

web.header('Transfer-Encoding','chunked')

Am I doing something wrong?
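Since web.py streams a response whenever the handler yields chunks instead of returning a single body, the pattern described above can be sketched as a small chunk generator. This is a minimal sketch: the helper name `read_in_chunks` and the 64 KiB chunk size are assumptions, not from the original post.

```python
import io


def read_in_chunks(fileobj, chunk_size=64 * 1024):
    """Yield a file-like object's contents in fixed-size chunks.

    Yielding (rather than returning one big string) lets web.py send
    the response incrementally, which keeps bytes flowing to the
    client while the file is being read.
    """
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk


# In a web.py GET handler this would be used roughly like
# (file path is hypothetical):
#
#     web.header('Content-type', 'application/zip')
#     web.header('Content-Disposition', 'attachment; filename="images.zip"')
#     for chunk in read_in_chunks(open('images.zip', 'rb')):
#         yield chunk
```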

Solution

It appears the problem was a result of bad gunicorn settings. Extending the gunicorn timeout in the Procfile did the trick:

--timeout 300
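In a Procfile this looks like the line below. The module and callable names (`app:application`) are assumptions; a web.py app typically exposes its WSGI callable via `app.wsgifunc()`.

```
web: gunicorn app:application --timeout 300
```

Note that this raises gunicorn's own worker timeout, which was killing the stream. Heroku's router limit is separate: it requires the first byte of the response within 30 seconds and thereafter allows a rolling 55-second window between bytes, which the yielding handler above already satisfies.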
