I'm trying to do a case-insensitive aggregation on a keyword-type field, but I'm running into problems getting it to work.
So far, I've tried adding a custom analyzer named "lowercase" that uses the "keyword" tokenizer and the "lowercase" filter. I then added a sub-field named "use_lowercase" to the mapping of the field I want to aggregate on. I also want to keep the existing "text" and "keyword" parts of the field, since I may still want to search for terms within it.
Here is the index definition, including the custom analyzer:
PUT authors
{
"settings": {
"analysis": {
"analyzer": {
"lowercase": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
},
"mappings": {
"famousbooks": {
"properties": {
"Author": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
},
"use_lowercase": {
"type": "text",
"analyzer": "lowercase"
}
}
}
}
}
}
}
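As a quick sanity check, the _analyze API can be used against this index to confirm that the custom analyzer emits a single lowercased token per value (here it should produce "agatha christie"); this request is just an illustration and not part of the original post:
GET authors/_analyze
{
  "analyzer": "lowercase",
  "text": "Agatha Christie"
}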
Now I add two records with the same Author (but with different casing):
POST authors/famousbooks/1
{
"Book": "The Mysterious Affair at Styles",
"Year": 1920,
"Price": 5.92,
"Genre": "Crime Novel",
"Author": "Agatha Christie"
}
POST authors/famousbooks/2
{
"Book": "And Then There Were None",
"Year": 1939,
"Price": 6.99,
"Genre": "Mystery Novel",
"Author": "Agatha christie"
}
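For context, a terms aggregation on the plain Author.keyword sub-field would be expected to return two separate buckets here, one per casing, which is exactly what the use_lowercase sub-field is meant to avoid. A sketch of that query (the aggregation name authors-by-keyword is just an arbitrary label, not from the original post):
GET authors/famousbooks/_search
{
  "size": 0,
  "aggs": {
    "authors-by-keyword": {
      "terms": {
        "field": "Author.keyword"
      }
    }
  }
}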
So far so good. Now, if I run a terms aggregation on the author:
GET authors/famousbooks/_search
{
"size": 0,
"aggs": {
"authors-aggs": {
"terms": {
"field": "Author.use_lowercase"
}
}
}
}
I get the following result:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "authors",
"node": "yxcoq_eKRL2r6JGDkshjxg",
"reason": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
}
],
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
},
"status": 400
}
So it seems to me that the aggregation is treating the field as text rather than keyword, hence the fielddata warning. I figured ES would be sophisticated enough to recognize that the terms field is effectively a keyword (via the custom analyzer) and can therefore be aggregated, but that is apparently not the case.
If I add "fielddata": true to the mapping for Author, the aggregation works fine, but given the dire warnings about high heap usage when that setting is enabled, I'm hesitant to do so. Is there a best practice for this kind of case-insensitive keyword aggregation? I was hoping I could simply say "type": "keyword", "filter": "lowercase" in the mapping section, but that doesn't appear to be available. If I go the "fielddata": true route, it feels like I'm having to swing far too big a stick to get this working. Any help would be appreciated!

Best answer
It turns out the solution is to use a custom normalizer instead of a custom analyzer:
PUT authors
{
"settings": {
"analysis": {
"normalizer": {
"myLowercase": {
"type": "custom",
"filter": [ "lowercase" ]
}
}
}
},
"mappings": {
"famousbooks": {
"properties": {
"Author": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
},
"use_lowercase": {
"type": "keyword",
"normalizer": "myLowercase",
"ignore_above": 256
}
}
}
}
}
}
}
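With this mapping in place, re-running the terms aggregation from the question on Author.use_lowercase should collapse both documents into a single lowercased bucket; the relevant part of the response would look roughly like this, assuming only the two documents above are indexed (illustrative, not output copied from the post):
"aggregations": {
  "authors-aggs": {
    "doc_count_error_upper_bound": 0,
    "sum_other_doc_count": 0,
    "buckets": [
      {
        "key": "agatha christie",
        "doc_count": 2
      }
    ]
  }
}
Note that the normalizer is applied at index time, so the mapping has to exist before the documents are indexed; data indexed under the old mapping would need to be reindexed for the new sub-field to be populated.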
This then allows a terms aggregation on the Author.use_lowercase field without any problems.

A similar question on Stack Overflow, "Elasticsearch 5.2.2: terms aggregation case insensitive", can be found at https://stackoverflow.com/questions/42517001/