Problem Description
I am using Cassandra 0.6.5 on a two-node (A and B) cluster. Hector is used on the client side.
One node, A, always throws a "too many open files" exception after running for some time. I ran netstat on that node, and it shows a large number of TCP connections stuck in the CLOSE_WAIT state.
These connections are the culprit behind the exception. But what causes so many CLOSE_WAIT connections? Is it a problem with the Hector client? And why does the other node, B, not have this problem?
Recommended Answer
Instead of using netstat, try lsof -n | grep java. How many file descriptors are listed there? (You can get a count with lsof -n | grep java | wc -l.)
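If lsof is not available, the same information can be read from /proc. The sketch below inspects the current shell ($$) purely as a self-contained illustration; in practice you would substitute the Cassandra JVM's pid (for example from pgrep -f java):

```shell
# Sketch: compare a process's open fd count against its soft "Max open files" limit.
# Using the current shell's pid ($$) here so the example is self-contained;
# replace with the Cassandra JVM's pid on a real node.
pid=$$
fd_count=$(ls /proc/$pid/fd | wc -l)
limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
echo "fds in use: $fd_count / soft limit: $limit"
```

When fd_count approaches the limit, the next accept() or open() fails with "too many open files", which matches the symptom on node A.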
The Datastax docs suggest you might be hitting the default file descriptor limit of 1024. You can change that via ulimit or in /etc/security/limits.conf. Datastax suggests the following changes:
echo "* soft nofile 32768" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 32768" | sudo tee -a /etc/security/limits.conf
echo "root soft nofile 32768" | sudo tee -a /etc/security/limits.conf
echo "root hard nofile 32768" | sudo tee -a /etc/security/limits.conf
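After editing limits.conf (the changes take effect on the next login session, not in already-running shells), you can verify the new limits with ulimit. A quick check, assuming a bash-compatible shell:

```shell
# Show the effective per-process file-descriptor limits for this session.
ulimit -Sn   # soft nofile limit (the one a process hits first)
ulimit -Hn   # hard nofile limit (ceiling the soft limit can be raised to)
```

If these still print 1024 after a fresh login, PAM may not be applying limits.conf (check that pam_limits is enabled for the login service).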
The Debian package sets the following values:
# Provided by the cassandra package
cassandra - memlock unlimited
cassandra - nofile 100000
I would also strongly recommend that you upgrade to a more recent version of Cassandra.