Problem Description
I recently had a Linux process which "leaked" file descriptors: it opened them and didn't properly close some of them.
If I had been monitoring this, I could have told in advance that the process was approaching its limit.
Is there a nice Bash/Python way to check the FD usage ratio for a given process on an Ubuntu Linux system?
I now know how to check how many open file descriptors there are; I only need to know how many file descriptors are allowed for a process. Some systems (like Amazon EC2) don't have the /proc/pid/limits file.
Thanks,
Udi
Recommended Answer
Count the entries in /proc/<pid>/fd/. The hard and soft limits applying to the process can be found in /proc/<pid>/limits.
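As a rough sketch of that approach (the script name and helper functions below are illustrative, not part of the answer), the following Python compares the number of entries in /proc/<pid>/fd/ against the soft "Max open files" value parsed from /proc/<pid>/limits; on systems that lack /proc/<pid>/limits, it simply reports the limit as unknown.

    #!/usr/bin/env python
    # fd_usage.py - minimal sketch: compare a process's open FDs to its soft limit.
    import os
    import sys

    def count_open_fds(pid):
        # Every entry in /proc/<pid>/fd/ is one currently open file descriptor.
        return len(os.listdir('/proc/%d/fd' % pid))

    def soft_fd_limit(pid):
        # Parse the "Max open files" row of /proc/<pid>/limits; the fourth
        # whitespace-separated field is the soft limit.
        try:
            with open('/proc/%d/limits' % pid) as f:
                for line in f:
                    if line.startswith('Max open files'):
                        return int(line.split()[3])
        except IOError:
            pass  # e.g. systems where /proc/<pid>/limits does not exist
        return None

    if __name__ == '__main__':
        pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
        used = count_open_fds(pid)
        limit = soft_fd_limit(pid)
        if limit:
            print('PID %d: %d of %d FDs in use (%.1f%%)'
                  % (pid, used, limit, 100.0 * used / limit))
        else:
            print('PID %d: %d FDs in use (soft limit unknown)' % (pid, used))

Run it as python fd_usage.py <pid>; note that reading another user's /proc/<pid>/fd directory generally requires running as that user or as root.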