Problem Description
I have a cluster with 2 DCs, and each DC contains 2 nodes.
DC1:
192.168.60.81
192.168.60.82
DC2:
192.168.60.242
192.168.60.247
Case 1:
Initially, when all the nodes are up and I read cluster.metadata.allHosts, it reports the state of every host as UP.
Case 2:
When any of the nodes in the local datacenter (DC1) goes up or down, cluster.metadata.allHosts gives me the correct host state information.
The problem:
When any of the nodes in the remote datacenter (DC2) goes down, cluster.metadata.allHosts correctly reports the host's state as DOWN. But when the same node comes back up, cluster.metadata.allHosts still reports its state as DOWN.
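For context, here is a minimal sketch of how these states can be read, assuming the Datastax Java driver 3.x; the contact point is one of the DC1 nodes above, and the class and loop are illustrative:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

public class HostStateCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("192.168.60.81") // one of the DC1 nodes
                .build();
        cluster.connect();

        // cluster.metadata.allHosts from the question, via the Java API
        for (Host host : cluster.getMetadata().getAllHosts()) {
            System.out.printf("%s (DC: %s) -> %s%n",
                    host.getAddress(),
                    host.getDatacenter(),
                    host.isUp() ? "UP" : "DOWN");
        }
        cluster.close();
    }
}
```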
I registered a Host.StateListener to see whether events are fired for a node in the remote DC. Unfortunately, when a node in the remote DC comes back up, it never notifies either.
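A minimal sketch of that registration, assuming the 3.x Host.StateListener interface (the logging body is illustrative):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

public class RemoteDcListener implements Host.StateListener {
    @Override public void onAdd(Host host)    { log("ADD", host); }
    // Expected to fire when a DC2 node comes back up, but it never does
    // under the configuration described in the answer below.
    @Override public void onUp(Host host)     { log("UP", host); }
    @Override public void onDown(Host host)   { log("DOWN", host); }
    @Override public void onRemove(Host host) { log("REMOVE", host); }
    @Override public void onRegister(Cluster cluster)   { }
    @Override public void onUnregister(Cluster cluster) { }

    private void log(String event, Host host) {
        System.out.printf("%s: %s (DC: %s)%n",
                event, host.getAddress(), host.getDatacenter());
    }
}
```

The listener is attached with `cluster.register(new RemoteDcListener());`.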
Any help would be appreciated.
Recommended Answer
I got this answer from Andrew Tolbert of Datastax on the mailing list:
Explanation:
The above answer makes perfect sense, as my load balancing policy was DCAwareRoundRobinPolicy with usedHostsPerRemoteDc set to 0. Hence the driver was not trying to reconnect to nodes in the remote DC, and the state of the remote nodes was not consistent from the driver's point of view.
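A minimal sketch of the adjustment implied by that explanation, assuming Java driver 3.x: give DCAwareRoundRobinPolicy a non-zero usedHostsPerRemoteDc so the driver keeps connections to (and therefore reconnects to) some remote hosts. The local DC name "DC1" here is an assumption, and note that withUsedHostsPerRemoteDc is deprecated in later 3.x releases:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class ClusterConfig {
    public static void main(String[] args) {
        // With usedHostsPerRemoteDc = 0 (the default), remote hosts are at
        // distance IGNORED: the driver holds no connections to them and never
        // attempts reconnection, so their UP/DOWN state can go stale.
        Cluster cluster = Cluster.builder()
                .addContactPoint("192.168.60.81")
                .withLoadBalancingPolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")          // assumed local DC name
                                .withUsedHostsPerRemoteDc(2) // keep 2 remote hosts connected
                                .build())
                .build();
        cluster.connect();
    }
}
```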