Question
I have a few related questions regarding memory usage in the following example.

If I run in the interpreter,
foo = ['bar' for _ in xrange(10000000)]
the real memory used on my machine goes up to 80.9mb. I then,
del foo
real memory goes down, but only to 30.4mb. The interpreter uses a 4.4mb baseline, so what is the advantage in not releasing 26mb of memory to the OS? Is it because Python is "planning ahead", thinking that you may use that much memory again?
Why does it release 50.5mb in particular - what is the amount that is released based on?
Is there a way to force Python to release all the memory that was used (if you know you won't be using that much memory again)?
NOTE: This question is different from "How can I explicitly free memory in Python?" because this question primarily deals with the increase in memory usage from baseline even after the interpreter has freed objects via garbage collection (with or without gc.collect).
Answer
Try it like this, and tell me what you get. Here's the link for psutil.Process.memory_info.
import os
import gc
import psutil

proc = psutil.Process(os.getpid())
gc.collect()
mem0 = proc.get_memory_info().rss

# create approx. 10**7 int objects and pointers
foo = ['abc' for x in range(10**7)]
mem1 = proc.get_memory_info().rss

# unreference, including x == 9999999
del foo, x
mem2 = proc.get_memory_info().rss

# collect() calls PyInt_ClearFreeList()
# or use ctypes: pythonapi.PyInt_ClearFreeList()
gc.collect()
mem3 = proc.get_memory_info().rss

pd = lambda x2, x1: 100.0 * (x2 - x1) / mem0
print "Allocation: %0.2f%%" % pd(mem1, mem0)
print "Unreference: %0.2f%%" % pd(mem2, mem1)
print "Collect: %0.2f%%" % pd(mem3, mem2)
print "Overall: %0.2f%%" % pd(mem3, mem0)
Output:
Allocation: 3034.36%
Unreference: -752.39%
Collect: -2279.74%
Overall: 2.23%
I switched to measuring relative to the process VM size to eliminate the effects of other processes in the system.
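Note: newer versions of psutil renamed get_memory_info() to memory_info(), with the same fields. A minimal sketch of reading both the resident and virtual sizes with the current API:

import os
import psutil

proc = psutil.Process(os.getpid())
info = proc.memory_info()
print("RSS: %d bytes" % info.rss)  # resident set size: physical memory in use
print("VMS: %d bytes" % info.vms)  # virtual memory size of the whole process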
The C runtime (e.g. glibc, msvcrt) shrinks the heap when contiguous free space at the top reaches a constant, dynamic, or configurable threshold. With glibc you can tune this with mallopt(M_TRIM_THRESHOLD). Given this, it isn't surprising if the heap shrinks by more -- even a lot more -- than the block that you free.
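For illustration, here's a minimal sketch of poking those glibc knobs from Python with ctypes. It assumes Linux with glibc: M_TRIM_THRESHOLD is -1 in glibc's <malloc.h>, and malloc_trim is a glibc extension, so verify both on your system before relying on this:

import ctypes

# Linux/glibc only; M_TRIM_THRESHOLD is -1 in glibc's <malloc.h>.
M_TRIM_THRESHOLD = -1

libc = ctypes.CDLL("libc.so.6")

# Trim the heap whenever 128 KiB of contiguous free space
# accumulates at the top.
libc.mallopt(M_TRIM_THRESHOLD, 128 * 1024)

# Or ask glibc to release free heap memory back to the OS right now.
libc.malloc_trim(0)

Neither call frees pages that live objects still occupy; they only affect memory the allocator already considers free.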
In 3.x range doesn't create a list, so the test above won't create 10 million int objects. Even if it did, the int type in 3.x is basically a 2.x long, which doesn't implement a freelist.
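A quick check of that first point on CPython 3.x (the sizes are what I'd expect on a 64-bit build, so treat them as approximate):

import sys

r = range(10**7)
print(type(r))            # <class 'range'> -- lazy, no int objects yet
print(sys.getsizeof(r))   # small constant size, regardless of the bounds

lst = list(r)             # this is what actually materializes 10**7 ints
print(sys.getsizeof(lst)) # ~80 MB just for the array of pointers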