I added 12 new data nodes to an existing cluster of 8 data nodes. I am trying to decommission the previous 8 nodes using "exclude allocation", as recommended.
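For reference, this kind of exclusion is typically applied as a transient cluster setting. A sketch of the call (the IP list below is a placeholder for the old nodes' addresses, and the default host/port is assumed):

```shell
# Tell Elasticsearch to move shards off the old nodes (placeholder IPs).
# cluster.routing.allocation.exclude._ip takes a comma-separated list.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.0.0.1,10.0.0.2"
  }
}'
```

Once this setting is in place, the balancer should start relocating shards away from the excluded nodes on its own.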



It did not relocate any shards, so I ran the reroute command with the "explain" option. Can someone explain what the following output is saying?

>  "explanations" : [ {
>     "command" : "move",
>     "parameters" : {
>       "index" : "2015-09-20",
>       "shard" : 0,
>       "from_node" : "_dDn1SmqSquhMGgjti6vGg",
>       "to_node" : "OQBFMt17RaWboOzMnUy2jA"
>     },
>     "decisions" : [ {
>       "decider" : "same_shard",
>       "decision" : "YES",
>       "explanation" : "shard is not allocated to same node or host"
>     }, {
>       "decider" : "filter",
>       "decision" : "YES",
>       "explanation" : "node passes include/exclude/require filters"
>     }, {
>       "decider" : "replica_after_primary_active",
>       "decision" : "YES",
>       "explanation" : "shard is primary"
>     }, {
>       "decider" : "throttling",
>       "decision" : "YES",
>       "explanation" : "below shard recovery limit of [16]"
>     }, {
>       "decider" : "enable",
>       "decision" : "YES",
>       "explanation" : "allocation disabling is ignored"
>     }, {
>       "decider" : "disable",
>       "decision" : "YES",
>       "explanation" : "allocation disabling is ignored"
>     }, {
>       "decider" : "awareness",
>       "decision" : "NO",
>       "explanation" : "too many shards on nodes for attribute: [dc]"
>     }, {
>       "decider" : "shards_limit",
>       "decision" : "YES",
>       "explanation" : "total shard limit disabled: [-1] <= 0"
>     }, {
>       "decider" : "node_version",
>       "decision" : "YES",
>       "explanation" : "target node version [1.4.5] is same or newer than source node version [1.4.5]"
>     }, {
>       "decider" : "disk_threshold",
>       "decision" : "YES",
>       "explanation" : "enough disk for shard on node, free: [1.4tb]"
>     }, {
>       "decider" : "snapshot_in_progress",
>       "decision" : "YES",
>       "explanation" : "no snapshots are currently running"
>

Best Answer

If you have replicas, you can simply shut the nodes down one at a time, waiting after each one for the cluster to become green again.

In that case, there is no need for an explicit reroute.
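Assuming the node-at-a-time approach, one way to wait between shutdowns is the cluster health API with `wait_for_status` (default host/port assumed):

```shell
# Block until the cluster reports green again, or give up after 60s.
curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'
```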

That said, judging from your output it sounds like you are using shard allocation awareness in your elasticsearch.yml. You should double-check those settings.
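To see which awareness attributes are actually in effect, you can inspect the live settings; a sketch assuming the default host/port:

```shell
# Cluster-wide settings applied at runtime:
curl -s 'localhost:9200/_cluster/settings?pretty'

# Per-node settings as loaded from elasticsearch.yml; look for
# cluster.routing.allocation.awareness.attributes (e.g. "dc"):
curl -s 'localhost:9200/_nodes/settings?pretty'
```

If forced awareness is configured for the `dc` attribute, the balancer will refuse to put more shards in one data center than the awareness rules allow, which matches the `"too many shards on nodes for attribute: [dc]"` decision in your output.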

09-28 07:45