This article describes how to deal with runaway JVM memory usage. It should be a useful reference for anyone facing the same problem; read on below.

Problem description

I have a Tomcat webapp which does some pretty memory- and CPU-intensive tasks on behalf of clients. This is normal and is the desired functionality. However, when I run Tomcat, memory usage skyrockets over time to upwards of 4.0 GB, at which point I usually kill the process, because it interferes with everything else running on my development machine:

I thought I had inadvertently introduced a memory leak with my code, but after checking with VisualVM, I'm seeing a different story:

VisualVM shows the heap taking up approximately 1 GB of RAM, which is what I set it to do with CATALINA_OPTS="-Xms256m -Xmx1024".

Why does my system see this process as taking up a ton of memory when, according to VisualVM, it's hardly using any at all?

After a bit of further sniffing around, I noticed that if multiple jobs are running simultaneously in the application, the memory does not get freed. However, if I wait for each job to complete before submitting the next one to my BlockingQueue serviced by an ExecutorService, then the memory is recycled effectively. How can I debug this? Why would garbage collection/memory reuse differ?
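For context, the submission pattern described above might look roughly like the sketch below. The class name and pool size are hypothetical stand-ins, not the asker's actual code; the point is only the difference between letting several memory-hungry jobs run at once and blocking on each Future before submitting the next:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobSubmitter {
    // newFixedThreadPool is internally backed by an unbounded LinkedBlockingQueue,
    // matching the "BlockingQueue serviced by an ExecutorService" setup described above.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    /** Concurrent mode: several memory-intensive jobs can be in flight at once. */
    public void submitConcurrently(Runnable job) {
        pool.submit(job);
    }

    /** Serial mode: block on each job before handing the next one over. */
    public void submitAndWait(Runnable job) throws Exception {
        Future<?> done = pool.submit(job);
        done.get(); // wait for completion, as in the asker's second experiment
    }

    public void shutdown() {
        pool.shutdown();
    }
}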

Solution

You can't control what you want to control. -Xmx only controls the Java heap; it does not control the JVM's consumption of native memory, which is used in completely different ways depending on the implementation. VisualVM only shows you what the heap is consuming; it does not show what the entire JVM consumes as native memory as an OS process. You will have to use OS-level tools to see that, and they will report radically different numbers, usually much larger than anything VisualVM reports, because the JVM uses native memory in an entirely different way.
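To make that gap concrete, here is a small sketch (not from the original post) that prints what the JVM itself reports for heap and non-heap usage via the standard java.lang.management API. Comparing these figures with the resident set size that OS tools such as top or ps show for the same process illustrates how much of the footprint lives outside the -Xmx-controlled heap:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcess {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        // Roughly what VisualVM's heap graph corresponds to:
        System.out.printf("Heap used/committed/max: %,d / %,d / %,d bytes%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());

        // JVM-reported non-heap (metaspace, code cache, ...) -- still only a slice
        // of native memory; thread stacks, GC bookkeeping, direct buffers and
        // other malloc'd memory are not included here.
        System.out.printf("Non-heap used/committed: %,d / %,d bytes%n",
                nonHeap.getUsed(), nonHeap.getCommitted());

        // Compare both numbers with the RSS reported by OS tools for this PID
        // to see the difference the answer is describing.
    }
}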

From the following article, Thanks for the Memory (Understanding How the JVM Uses Native Memory on Windows and Linux):

Maintaining the heap and the garbage collector uses native memory you can't control.

And the JIT compiler uses native memory, just like javac would.

And then you have the classloader(s), which use native memory.

I won't even start quoting the section on Threads; I think you get the idea. -Xmx doesn't control what you think it controls: it controls the JVM heap, but not everything goes in the JVM heap, and the heap takes up even more native memory than what you specify, for management and bookkeeping.

Plain and simple, the JVM uses more memory than what is supplied via -Xms, -Xmx, and the other command-line parameters.

Here is a very detailed article on how the JVM allocates and manages memory. It isn't as simple as what you assumed in your question, and it is well worth a comprehensive read.
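As a further illustration of those quoted points (again, not part of the original answer), the same management API can enumerate some of the non-heap consumers mentioned above. In a sketch like the one below, pools such as Metaspace and the JIT code cache show up explicitly, and none of them are governed by -Xmx:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapPools {
    public static void main(String[] args) {
        // Pool names vary by JVM version and vendor (e.g. "Metaspace",
        // "Compressed Class Space", "CodeHeap 'non-nmethods'"), and they are
        // still only part of native usage: thread stacks, GC bookkeeping and
        // direct buffers do not appear in this list at all.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.printf("%-30s used=%,d bytes%n",
                        pool.getName(), pool.getUsage().getUsed());
            }
        }
    }
}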



The ThreadStack size in many implementations has minimum limits that vary by operating system and sometimes by JVM version; the thread-stack setting is ignored if you set the limit below the native OS limit for the JVM or the OS (sometimes ulimit on *nix has to be set instead). Other command-line options work the same way, silently defaulting to higher values when values that are too small are supplied. Don't assume that all the values you pass in represent what is actually used.
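One concrete example of a supplied value being treated only as a hint is the Thread constructor that accepts a requested stack size; its documentation states that the effect is highly platform-dependent and that the value may be ignored outright, much like a too-small -Xss. The sketch below (an illustration, not the asker's code) requests a tiny stack and then measures how deep recursion actually gets:

public class StackSizeHint {
    private static int depth = 0;

    public static void main(String[] args) throws InterruptedException {
        // Request a 64 KB stack; the JVM may silently round this up to its
        // platform minimum or ignore it entirely.
        Thread t = new Thread(null, StackSizeHint::recurse, "tiny-stack", 64 * 1024);
        t.start();
        t.join();
        System.out.println("Recursion depth actually reached: " + depth);
    }

    private static void recurse() {
        try {
            dig();
        } catch (StackOverflowError expected) {
            // The depth reached reflects the stack the JVM really allocated,
            // not necessarily the 64 KB that was requested.
        }
    }

    private static void dig() {
        depth++;
        dig();
    }
}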



The classloaders (and Tomcat has more than one) eat up lots of memory that isn't easily documented. The JIT eats up a lot of memory, trading space for time, which is a good trade-off most of the time.


That concludes this article on runaway JVM memory usage. We hope the answer recommended here is helpful, and thank you for your support!
