
Problem description

I have been working on a business problem where I need to find the similarity of a new document to existing ones. I have used the following approaches:

1. Bag of words + Cosine similarity

2. TF-IDF + Cosine similarity

3. Word2Vec + Cosine similarity
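As a point of reference, approach 2 (TF-IDF + cosine similarity) can be sketched with scikit-learn, assuming it is installed; the documents here are illustrative only, not the asker's data:

```python
# Minimal TF-IDF + cosine similarity sketch (illustrative documents).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["We rented a vehicle to drive to Goa",
        "The car broke down on our journey to Goa"]

# Vectorize both documents into TF-IDF space, then compare them.
tfidf = TfidfVectorizer().fit_transform(docs)
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(score)
```

Note that plain TF-IDF only credits exact token overlap ("to", "Goa" here); "vehicle" and "car" contribute nothing, which is the weakness that motivates the word-embedding approaches below.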

None of them worked as expected. But finally I found an approach that works better: Word2Vec + soft cosine similarity.
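The soft cosine similarity mentioned above can be sketched in pure Python. This is a toy formulation with a hypothetical hand-made word-similarity matrix S (in practice S would come from Word2Vec embeddings), where softcos(a, b) = aᵀSb / √(aᵀSa · bᵀSb):

```python
import math

# Toy vocabulary and a hypothetical word-to-word similarity matrix S
# (diagonal = 1.0; "car" and "vehicle" are assumed 0.8 similar).
vocab = ["car", "vehicle", "goa", "weather"]
S = [
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

def quad(a, b):
    # Computes the bilinear form a^T S b.
    return sum(a[i] * S[i][j] * b[j]
               for i in range(len(a)) for j in range(len(b)))

def soft_cosine(a, b):
    denom = math.sqrt(quad(a, a)) * math.sqrt(quad(b, b))
    return quad(a, b) / denom if denom else 0.0

# "car goa" vs "vehicle goa": plain cosine only sees the shared "goa"
# (0.5), while soft cosine also credits the car/vehicle relation.
a = [1, 0, 1, 0]
b = [0, 1, 1, 0]
print(round(soft_cosine(a, b), 3))
```

For real corpora, gensim provides this machinery out of the box; the point of the sketch is only to show why semantically related but non-identical words can push many documents toward the same score.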

But the new challenge is that I ended up with multiple documents with the same similarity score. Most of them are relevant, but a few are different even though they contain some semantically similar words.

Please suggest how to overcome this issue.

Recommended answer

If the objective is to identify semantic similarity, the following code (sourced from here) helps.

#invoke libraries
from nltk import pos_tag, word_tokenize
from nltk.corpus import wordnet as wn

#Build functions
def ptb_to_wn(tag):
    """Map a Penn Treebank POS tag to a WordNet POS tag."""
    if tag.startswith('N'):
        return 'n'
    if tag.startswith('V'):
        return 'v'
    if tag.startswith('J'):
        return 'a'
    if tag.startswith('R'):
        return 'r'
    return None


def tagged_to_synset(word, tag):
    """Return the most common synset for a tagged word, or None."""
    wn_tag = ptb_to_wn(tag)
    if wn_tag is None:
        return None
    try:
        return wn.synsets(word, wn_tag)[0]
    except IndexError:  # no synset exists for this word/POS pair
        return None


def sentence_similarity(s1, s2):
    s1 = pos_tag(word_tokenize(s1))
    s2 = pos_tag(word_tokenize(s2))

    synsets1 = [tagged_to_synset(*tagged_word) for tagged_word in s1]
    synsets2 = [tagged_to_synset(*tagged_word) for tagged_word in s2]

    #drop words that have no synset
    synsets1 = [ss for ss in synsets1 if ss]
    synsets2 = [ss for ss in synsets2 if ss]

    score, count = 0.0, 0

    for synset in synsets1:
        # path_similarity can return None, so filter those out
        # before taking the max (max over a list containing None
        # raises TypeError on Python 3)
        sims = [synset.path_similarity(ss) for ss in synsets2]
        sims = [s for s in sims if s is not None]
        if sims:
            score += max(sims)
            count += 1

    # Average the values (guard against no matches at all)
    if count == 0:
        return 0.0
    score /= count
    return score

#Build function to compute the symmetric sentence similarity
def symSentSim(s1, s2):
    sss_score = (sentence_similarity(s1, s2) + sentence_similarity(s2, s1)) / 2
    return sss_score

#Example
s1 = 'We rented a vehicle to drive to Goa'
s2 = 'The car broke down on our jouney'

s1tos2 = symSentSim(s1, s2)

print(s1tos2)
#0.155753968254
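To come back to the original tie problem: candidates that share the same primary (soft cosine) score can be re-ranked by a secondary measure such as the symSentSim above. A minimal sketch of that idea, using a hypothetical token-overlap (Jaccard) stand-in so the example stays self-contained:

```python
def rerank_ties(query, docs_with_scores, secondary_sim):
    """Sort (doc, primary_score) pairs by primary score, then by a
    secondary similarity to the query, both descending."""
    return sorted(
        docs_with_scores,
        key=lambda ds: (ds[1], secondary_sim(query, ds[0])),
        reverse=True,
    )

# Stand-in secondary measure: Jaccard overlap of lowercase tokens.
def token_jaccard(s1, s2):
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    return len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0

# Illustrative candidates: the first two tie on the primary score.
docs = [("We rented a car in Goa", 0.8),
        ("The weather in Goa is humid", 0.8),
        ("Stock prices fell sharply", 0.3)]
ranked = rerank_ties("We rented a vehicle to drive to Goa",
                     docs, token_jaccard)
print(ranked[0][0])
```

In practice the secondary measure would be symSentSim (or any scorer with different failure modes than the primary one), so that documents that merely contain a few semantically similar words fall below the truly relevant ones.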

