Problem description
I have a CherryPy web server that needs to be able to receive large files over HTTP POST. I have something working at the moment, but it fails once the file being sent gets too big (around 200MB). I'm using curl to send test POST requests, and when I try to send a file that's too big, curl spits out "The entity sent with the request exceeds the maximum allowed bytes." Searching around, this seems to be an error from CherryPy.
So I'm guessing that the file being sent needs to be sent in chunks? I tried something with mmap, but I couldn't get it to work. Does the method that handles the file upload need to be able to accept the data in chunks too?
Recommended answer
I took DirectToDiskFileUpload as a starting point. The changes it makes to handle big uploads are:
- setting server.max_request_body_size to 0 (default 100MB),
- setting server.socket_timeout to 60 (default 10s),
- setting response.timeout to 3600 (default 300s),
- avoiding a double copy by using tempfile.NamedTemporaryFile.
There are also some useless actions taken to supposedly avoid holding the upload in memory: they disable standard CherryPy body processing and use cgi.FieldStorage manually instead. This is useless because of cherrypy._cpreqbody.Part.maxrambytes:
the number of bytes after which point a Part will store its data in a file instead of a string. It defaults to 1000, just like the cgi module in Python's standard library.
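The same spill-to-disk threshold idea exists in the standard library as tempfile.SpooledTemporaryFile, which rolls over to a real on-disk file once writes pass max_size. The sketch below is an analogy, not CherryPy's own code, and peeks at _rolled, a CPython implementation detail, only to observe the rollover:

```python
import tempfile

# Analogy for Part.maxrambytes: SpooledTemporaryFile keeps data in memory
# until it grows past max_size, then rolls over to an on-disk file.
spool = tempfile.SpooledTemporaryFile(max_size=1000)
spool.write(b'x' * 999)             # under the threshold: still in memory
in_memory_before = not spool._rolled
spool.write(b'x' * 10)              # crosses the threshold: spilled to disk
on_disk_after = spool._rolled
spool.close()
```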
I've experimented with the following code (run with Python 2.7.4, CherryPy 3.6) and a 1.4GB file. Memory usage (in gnome-system-monitor) never exceeded 10MiB. According to the number of bytes actually written to the disk, write_bytes from cat /proc/PID/io is almost the size of the file. With the standard cherrypy._cpreqbody.Part and shutil.copyfileobj it is obviously doubled.
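The write_bytes check can be reproduced by parsing the text of /proc/PID/io (Linux); a small sketch, where sample_io is a fabricated excerpt in that file's "key: value" format:

```python
# Sketch: extract write_bytes from the contents of /proc/PID/io (Linux).
# sample_io is made up for illustration; 1503238553 bytes is roughly 1.4GB.
def parse_write_bytes(io_text):
    for line in io_text.splitlines():
        key, _, value = line.partition(':')
        if key.strip() == 'write_bytes':
            return int(value)
    raise ValueError('write_bytes not found')

sample_io = '''rchar: 2048
wchar: 1503238553
read_bytes: 0
write_bytes: 1503238553
'''
print(parse_write_bytes(sample_io))  # prints 1503238553 (~1.4GB)
```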
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import tempfile

import cherrypy


config = {
  'global' : {
    'server.socket_host' : '127.0.0.1',
    'server.socket_port' : 8080,
    'server.thread_pool' : 8,
    # remove any limit on the request body size; cherrypy's default is 100MB
    'server.max_request_body_size' : 0,
    # increase server socket timeout to 60s; cherrypy's default is 10s
    'server.socket_timeout' : 60
  }
}


class NamedPart(cherrypy._cpreqbody.Part):

  def make_file(self):
    return tempfile.NamedTemporaryFile()

cherrypy._cpreqbody.Entity.part_class = NamedPart


class App:

  @cherrypy.expose
  def index(self):
    return '''<!DOCTYPE html>
      <html>
      <body>
        <form action='upload' method='post' enctype='multipart/form-data'>
          File: <input type='file' name='videoFile'/> <br/>
          <input type='submit' value='Upload'/>
        </form>
      </body>
      </html>
    '''

  @cherrypy.config(**{'response.timeout': 3600})  # default is 300s
  @cherrypy.expose()
  def upload(self, videoFile):
    assert isinstance(videoFile, cherrypy._cpreqbody.Part)

    destination = os.path.join('/home/user/', videoFile.filename)

    # Note that the original link will be deleted by tempfile.NamedTemporaryFile
    os.link(videoFile.file.name, destination)

    # Double copy with standard ``cherrypy._cpreqbody.Part``
    #import shutil
    #with open(destination, 'wb') as f:
    #  shutil.copyfileobj(videoFile.file, f)

    return 'Okay'


if __name__ == '__main__':
  cherrypy.quickstart(App(), '/', config)
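The os.link call in upload() is the key trick: it gives the temporary file a second directory entry before NamedTemporaryFile unlinks its own name on close. A standalone sketch of that behavior, with a hypothetical destination path:

```python
import os
import tempfile

# Hard-link a NamedTemporaryFile to a destination before it closes:
# close() removes the original name, but the new link keeps the data.
with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b'uploaded bytes')
    tmp.flush()
    destination = tmp.name + '.kept'   # hypothetical destination path
    os.link(tmp.name, destination)     # second hard link to the same inode
# tmp.name is gone now; destination still refers to the data
with open(destination, 'rb') as f:
    data = f.read()
os.unlink(destination)
```

A matching test upload from the question's curl setup would be a multipart POST of the form curl -F videoFile=@bigfile http://127.0.0.1:8080/upload, which sends the same request the HTML form does.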