This article explains how to deal with the Elasticsearch "[FIELDDATA] Data too large" error. It should be a useful reference for anyone who runs into the same problem.

Problem Description


I open Kibana and do a search, and I get an error that shards failed. I looked in the elasticsearch.log file and saw this error:

org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]



Is there any way to increase that limit of 593.9mb?

Recommended Answer


You can try increasing the fielddata circuit breaker limit to 75% (the default is 60%) in your elasticsearch.yml config file and restarting your cluster:

indices.breaker.fielddata.limit: 75%
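After the restart, you can confirm the new limit took effect by querying the node stats API, which reports each circuit breaker's configured limit and current estimated usage (this is a standard Elasticsearch endpoint; adjust the host if your node is not local):

curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'
# In the response, check breakers.fielddata.limit_size_in_bytes
# and breakers.fielddata.estimated_size_in_bytes for each node.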


Or, if you prefer not to restart the whole cluster, you can apply the same setting dynamically through the cluster settings API (note that this example sets the limit to 40%):

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%"
  }
}'
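Before raising the limit, it can also help to see how much fielddata memory each node is actually holding. The cat API below is a quick way to check (standard endpoint, no extra assumptions):

curl -XGET 'localhost:9200/_cat/fielddata?v'
# Prints fielddata heap usage per node, broken down by field.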

Give it a try.

That concludes this article on the "FIELDDATA Data is too large" error. We hope the recommended answer helps, and thanks for your continued support!
