What is the proper way to write to the Google App Engine blobstore as a file in Python 2.5?

This article covers the question "What is the proper way to write to the Google App Engine blobstore as a file in Python 2.5?" and its answer, which should be a useful reference for anyone running into the same problem.

Problem description

I am currently exceeding the soft memory limit when I try to do simple writes to the Google App Engine blobstore. What is the proper way to write this code so that it does not leak memory?

from __future__ import with_statement
from google.appengine.api import files
from google.appengine.api import blobstore
def files_test(limit):
    file_name = files.blobstore.create(mime_type='application/octet-stream')
    try:
        with files.open(file_name, 'a') as f:
            for x in range(limit):
                f.write("Testing \n")
    finally:
        files.finalize(file_name)
        return files.blobstore.get_blob_key(file_name)

files_test(4000) produces the error:

Exceeded soft private memory limit with 157.578 MB after servicing 27 requests total

Solution

Unfortunately, Python's garbage collector is not perfect. Every write you do creates lots of small objects (via protocol buffer creation) that, for some reason, are not collected by Python on the fly. I found that in the mapreduce library I have to run

import gc
gc.collect()

from time to time to keep the garbage collector happy.
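
Putting the two pieces together, here is a minimal sketch of how that advice might be applied to the original files_test function. The choice to collect every 1000 writes (the gc_every parameter) is an arbitrary assumption for illustration, not something specified in the answer.

from __future__ import with_statement
import gc

from google.appengine.api import files

def files_test(limit, gc_every=1000):
    # Create a writable blobstore file, as in the original code.
    file_name = files.blobstore.create(mime_type='application/octet-stream')
    try:
        with files.open(file_name, 'a') as f:
            for x in range(limit):
                f.write("Testing \n")
                # Each write creates small protocol buffer objects; force a
                # collection periodically so they do not accumulate.
                # gc_every=1000 is an illustrative value, not from the answer.
                if x % gc_every == 0:
                    gc.collect()
    finally:
        files.finalize(file_name)
    return files.blobstore.get_blob_key(file_name)

Moving the return out of the finally block is a small additional cleanup, so that an exception raised while writing is not silently swallowed.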

This concludes the article on "What is the proper way to write to the Google App Engine blobstore as a file in Python 2.5?". We hope the answer above is helpful.
