Problem Description
We are facing an issue where the resident memory of the Java process grows gradually. We have -Xmx set to 4096 MB and -XX:MaxPermSize=1536m. The number of active threads is ~1500, with -Xss set to 256 KB.
When the application server (JBoss 6.1) starts, the resident memory used is ~5.6 GB (we have been using the top command to monitor it); it gradually grows (around 0.3 to 0.5 GB per day) until it reaches ~7.4 GB, at which point the kernel's OOM killer kills the process due to a shortage of RAM (the server has 9 GB of RAM).
We have been regularly monitoring thread dumps and see no sign of a thread leak. We are still unable to figure out where this extra memory is coming from.
pmap output shows a number of anon blocks (apart from the regular blocks for stack and heap), mostly in arenas of 64 MB, which are unaccounted for in terms of memory usage by the heap, perm gen, and stacks.
In the heap dump we have also tried looking for DirectByteBuffers and sun.misc.Unsafe objects, which are generally used for non-heap memory allocation, but both the number of objects and the memory they account for seem nominal. Is it possible that there can still be un-freed native memory even after these objects are GCed? Are there any other classes that may result in using up non-heap memory?
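For illustration, this is the sort of allocation we are wondering about: native memory obtained through sun.misc.Unsafe is never reclaimed by the GC, and it survives even if the object holding the address is collected, unless freeMemory is called explicitly. A made-up sketch (not our actual code):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeAllocationSketch {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe instance via reflection (it is not a public API).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // 64 MB of native memory: it shows up in RSS/pmap, but not in the
        // Java heap and not in a heap dump.
        long address = unsafe.allocateMemory(64L * 1024 * 1024);

        // If the code holding 'address' is garbage collected without this
        // call, the 64 MB stays allocated for the life of the process.
        unsafe.freeMemory(address);
    }
}
```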
Our application has native calls of its own, and it is possible that some third-party libraries have them as well.
Any ideas on what could be causing this? Any other details or tools that could further help debug such an increase? Any known issues that we should look out for? Platform: JBoss 6.1 running on CentOS 5.6.
Solution
The increase in RSS usage might be caused by a native memory leak. A common problem is native memory leaks caused by not closing a ZipInputStream/GZIPInputStream.
A typical way that a ZipInputStream is opened is by a call to Class.getResource/ClassLoader.getResource and then calling openConnection().getInputStream() on the java.net.URL instance, or by calling Class.getResourceAsStream/ClassLoader.getResourceAsStream. One must ensure that these streams always get closed.
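As a minimal sketch of that pattern (the resource name and class are made up for illustration), the leaky variant abandons the stream, while the safe variant closes it deterministically:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceStreamSketch {
    // Leaky pattern: the stream (often a ZipInputStream over a jar entry) is
    // never closed, so the underlying native zip structures stay allocated.
    static byte[] readConfigLeaky() throws IOException {
        InputStream in = ResourceStreamSketch.class.getResourceAsStream("/config.properties");
        return readAll(in); // the stream is abandoned here
    }

    // Safe pattern: try-with-resources (Java 7+) guarantees close();
    // on Java 6 the same can be done with a try/finally block.
    static byte[] readConfigSafe() throws IOException {
        try (InputStream in = ResourceStreamSketch.class.getResourceAsStream("/config.properties")) {
            return readAll(in);
        }
    }

    private static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```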
You can use jemalloc to debug native memory leaks by enabling its malloc sampling profiler through settings in the MALLOC_CONF environment variable. Detailed instructions are available in this blog post, which walks through using jemalloc to track down a native memory leak in a Java application: http://www.evanjones.ca/java-native-leak-bug.html.
The same blog also contains information about another native memory leak related to ByteBuffers.
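The specifics of that issue are in the post; as general background, a direct ByteBuffer's backing memory lives outside the Java heap and is released only when the buffer object itself is eventually garbage collected, which is why RSS can stay high even though the heap looks small. A rough sketch of this behaviour (illustrative only, not taken from the post):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferSketch {
    public static void main(String[] args) {
        List<ByteBuffer> held = new ArrayList<ByteBuffer>();

        // Each allocateDirect call reserves native (off-heap) memory; the Java
        // heap only holds the small DirectByteBuffer wrapper objects, so a
        // heap dump will not show these 512 MB.
        for (int i = 0; i < 8; i++) {
            held.add(ByteBuffer.allocateDirect(64 * 1024 * 1024)); // 64 MB each
        }

        // The native memory is released only once the wrappers become
        // unreachable and the GC actually collects them (via their Cleaner).
        // -XX:MaxDirectMemorySize caps how much can be allocated this way.
        held.clear();
        System.gc(); // a hint only; until collection happens, RSS stays high
    }
}
```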