I'm using Lucene's Highlighter class to highlight fragments of matching search results, and it works well. I'd like to switch from the StandardAnalyzer to the EnglishAnalyzer, which performs stemming.

The search results are good, but the highlighter no longer always finds a match. Here's an example of what I'm seeing:

document field text 1: Everyone likes goats.

document field text 2: I have a goat that eats everything.

Searching for "goat" with the EnglishAnalyzer, both documents match, but the highlighter only finds a matching fragment in document 2. Is there a way to get the highlighter to return data for both documents?

I understand that the tokens' characters differ, but the same token is still present in both cases, so it seems reasonable to simply highlight whatever token exists at that position.

In case it helps, this is with Lucene 3.5.
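To illustrate why both documents match in the first place: a stemming analyzer reduces inflected forms to a shared root term at index time. The sketch below uses a hypothetical one-rule plural stripper (not the actual Porter algorithm that EnglishAnalyzer applies) just to show that "goat" and "goats" can end up as the same indexed term, while the stored text keeps the original surface forms that the highlighter must mark up:

```java
public class StemSketch {
    // Toy stemming rule for illustration only; EnglishAnalyzer actually
    // runs the full Porter stemmer, which handles far more cases.
    static String stem(String token) {
        String t = token.toLowerCase();
        if (t.endsWith("s") && !t.endsWith("ss")) {
            // Strip a trailing plural "s" (but keep words like "glass").
            return t.substring(0, t.length() - 1);
        }
        return t;
    }

    public static void main(String[] args) {
        // Both surface forms index to the same term, so both documents
        // match a query for "goat"...
        System.out.println(stem("goats")); // goat
        System.out.println(stem("goat"));  // goat
        // ...but the stored field text still contains "goats" / "goat",
        // which is what the highlighter has to locate and wrap.
    }
}
```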

Best answer

I found a way to solve this: I switched from the Highlighter class to FastVectorHighlighter. It looks like I'll also pick up a speed improvement (at the cost of storing term vector data). For the benefit of anyone who runs into this question later, here's a unit test showing how it all works together:

package com.sample.index;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.vectorhighlight.*;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import static junit.framework.Assert.assertEquals;

public class TestIndexStuff {
    public static final String FIELD_NORMAL = "normal";
    public static final String[] PRE_TAGS = new String[]{"["};
    public static final String[] POST_TAGS = new String[]{"]"};
    private IndexSearcher searcher;
    private Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_35);

    @Before
    public void init() throws IOException {
        RAMDirectory idx = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35, analyzer);

        IndexWriter writer = new IndexWriter(idx, config);
        addDocs(writer);
        writer.close();

        searcher = new IndexSearcher(IndexReader.open(idx));
    }

    private void addDocs(IndexWriter writer) throws IOException {
        for (String text : new String[] {
              "Pretty much everyone likes goats.",
              "I have a goat that eats everything.",
              "goats goats goats goats goats"}) {
            Document doc = new Document();
            // FastVectorHighlighter requires term vectors with positions
            // and offsets to be stored for the highlighted field.
            doc.add(new Field(FIELD_NORMAL, text, Field.Store.YES,
                    Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            writer.addDocument(doc);
        }
    }

    private FastVectorHighlighter makeHighlighter() {
        FragListBuilder fragListBuilder = new SimpleFragListBuilder(200);
        // Wrap matches in [ ] instead of the default <b> tags.
        FragmentsBuilder fragmentBuilder = new SimpleFragmentsBuilder(PRE_TAGS, POST_TAGS);
        return new FastVectorHighlighter(true, true, fragListBuilder, fragmentBuilder);
    }

    @Test
    public void highlight() throws ParseException, IOException {
        Query query = new QueryParser(Version.LUCENE_35, FIELD_NORMAL, analyzer)
                    .parse("goat");
        FastVectorHighlighter highlighter = makeHighlighter();
        FieldQuery fieldQuery = highlighter.getFieldQuery(query);

        TopDocs topDocs = searcher.search(query, 10);
        List<String> fragments = new ArrayList<String>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            fragments.add(highlighter.getBestFragment(fieldQuery, searcher.getIndexReader(),
                    scoreDoc.doc, FIELD_NORMAL, 10000));
        }

        assertEquals(3, fragments.size());
        assertEquals("[goats] [goats] [goats] [goats] [goats]", fragments.get(0).trim());
        assertEquals("Pretty much everyone likes [goats].", fragments.get(1).trim());
        assertEquals("I have a [goat] that eats everything.", fragments.get(2).trim());
    }
}

A similar question about Lucene highlighting with a stemming analyzer can be found on Stack Overflow: https://stackoverflow.com/questions/10339704/
