Problem Description
Our recent observations of our production system tell us that the resident memory usage of our Java container keeps growing. To understand this problem, we investigated with native tools such as pmap why the java process consumes much more memory than Heap + Thread Stacks + Shared Objects + Code Cache + etc. As a result, we found some 64M memory blocks (in pairs) allocated by the native process (probably with malloc/mmap):
0000000000400000 4K r-x-- /usr/java/jdk1.7.0_17/bin/java
0000000000600000 4K rw--- /usr/java/jdk1.7.0_17/bin/java
0000000001d39000 4108K rw--- [ anon ]
0000000710000000 96000K rw--- [ anon ]
0000000715dc0000 39104K ----- [ anon ]
00000007183f0000 127040K rw--- [ anon ]
0000000720000000 3670016K rw--- [ anon ]
00007fe930000000 62876K rw--- [ anon ]
00007fe933d67000 2660K ----- [ anon ]
00007fe934000000 20232K rw--- [ anon ]
00007fe9353c2000 45304K ----- [ anon ]
00007fe938000000 65512K rw--- [ anon ]
00007fe93bffa000 24K ----- [ anon ]
00007fe940000000 65504K rw--- [ anon ]
00007fe943ff8000 32K ----- [ anon ]
00007fe948000000 61852K rw--- [ anon ]
00007fe94bc67000 3684K ----- [ anon ]
00007fe950000000 64428K rw--- [ anon ]
00007fe953eeb000 1108K ----- [ anon ]
00007fe958000000 42748K rw--- [ anon ]
00007fe95a9bf000 22788K ----- [ anon ]
00007fe960000000 8080K rw--- [ anon ]
00007fe9607e4000 57456K ----- [ anon ]
00007fe968000000 65536K rw--- [ anon ]
00007fe970000000 22388K rw--- [ anon ]
00007fe9715dd000 43148K ----- [ anon ]
00007fe978000000 60972K rw--- [ anon ]
00007fe97bb8b000 4564K ----- [ anon ]
00007fe980000000 65528K rw--- [ anon ]
00007fe983ffe000 8K ----- [ anon ]
00007fe988000000 14080K rw--- [ anon ]
00007fe988dc0000 51456K ----- [ anon ]
00007fe98c000000 12076K rw--- [ anon ]
00007fe98cbcb000 53460K ----- [ anon ]
I interpret the line starting with 0000000720000000 (3670016K) as the heap space, whose size we define with the JVM parameter -Xmx. Right after that the pairs begin, each of which sums to exactly 64M.
We are using CentOS release 5.10 (Final), 64-bit arch, and JDK 1.7.0_17.
The question is: what are those blocks, and which subsystem allocates them?
Update: We do not use JIT and/or JNI native code invocations.
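For reference, a quick way to tally these anonymous regions from pmap output like the above (a rough sketch that assumes the default pmap format; $PID is a placeholder for the java process id):

# sum all anonymous mappings and list the ones close to 64 MB
pmap $PID | awk '/anon/ { sub(/K$/, "", $2); total += $2; if ($2 >= 60000) print } END { print "total anon KB:", total }'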
Recommended Answer
I ran into the same problem. This is a known problem with glibc >= 2.10: since that release the allocator creates per-thread memory arenas (on 64-bit, by default up to 8 times the number of cores), and each arena reserves a 64 MB region, which shows up in pmap as exactly the kind of rw---/----- pairs you are seeing.
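To check which glibc version the system is actually running, something like this should do (a sketch; the rpm query assumes an RPM-based distribution such as CentOS):

# print the glibc version in use
ldd --version | head -n 1
# on RPM-based systems this works as well
rpm -q glibc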
The cure is to set this environment variable:

export MALLOC_ARENA_MAX=4
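One common way to apply it is in the shell script that launches the JVM, so the java process inherits the setting (a minimal sketch; the -Xmx value and jar name are placeholders, not taken from the question):

# limit glibc malloc arenas before starting the JVM so the java process inherits it
export MALLOC_ARENA_MAX=4
exec java -Xmx3584m -jar your-app.jar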
IBM article about setting MALLOC_ARENA_MAX:
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
Google for MALLOC_ARENA_MAX or search for it on SO to find a lot of references.
You might also want to tune other malloc options to optimize for low fragmentation of allocated memory:
# tune glibc memory allocation, optimize for low fragmentation
# limit the number of arenas
export MALLOC_ARENA_MAX=2
# disable dynamic mmap threshold, see M_MMAP_THRESHOLD in "man mallopt"
export MALLOC_MMAP_THRESHOLD_=131072
export MALLOC_TRIM_THRESHOLD_=131072
export MALLOC_TOP_PAD_=131072
export MALLOC_MMAP_MAX_=65536
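After restarting the JVM with these settings in place, a quick before/after comparison should show fewer of the ~64 MB anonymous pairs (a sketch reusing the pmap approach from the question; $PID is again a placeholder):

# count anonymous regions of roughly 64 MB or more
pmap $PID | awk '/anon/ { sub(/K$/, "", $2); if ($2 >= 60000) n++ } END { print n+0, "anon regions >= 60000K" }'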