I am trying to generate topics with gensim from a corpus of 300,000 records. The model trains fine and I can print the topics afterwards, but when I try to visualize them with pyLDAvis I get a ValidationError.
# Running and training the LDA model on the document-term matrix.
# (Lda is presumably gensim.models.ldamulticore.LdaMulticore, given the `workers` argument.)
ldamodel1 = Lda(doc_term_matrix1, num_topics=10, id2word=dictionary1, passes=50, workers=4)
print(ldamodel1.print_topics(num_topics=10, num_words=10))
# pyLDAvis
import pyLDAvis.gensim

d = gensim.corpora.Dictionary.load('dictionary1.dict')
c = gensim.corpora.MmCorpus('corpus.mm')
lda = gensim.models.LdaModel.load('topic.model')
#error on executing this line
data = pyLDAvis.gensim.prepare(lda, c, d)
Running pyLDAvis as above produces the following error:
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-53-33fd88b65056> in <module>()
----> 1 data = pyLDAvis.gensim.prepare(lda, c, d)
2 data
C:\ProgramData\Anaconda3\lib\site-packages\pyLDAvis\gensim.py in prepare(topic_model, corpus, dictionary, doc_topic_dist, **kwargs)
110 """
111 opts = fp.merge(_extract_data(topic_model, corpus, dictionary, doc_topic_dist), kwargs)
--> 112 return vis_prepare(**opts)
C:\ProgramData\Anaconda3\lib\site-packages\pyLDAvis\_prepare.py in prepare(topic_term_dists, doc_topic_dists, doc_lengths, vocab, term_frequency, R, lambda_step, mds, n_jobs, plot_opts, sort_topics)
372 doc_lengths = _series_with_name(doc_lengths, 'doc_length')
373 vocab = _series_with_name(vocab, 'vocab')
--> 374 _input_validate(topic_term_dists, doc_topic_dists, doc_lengths, vocab, term_frequency)
375 R = min(R, len(vocab))
376
C:\ProgramData\Anaconda3\lib\site-packages\pyLDAvis\_prepare.py in _input_validate(*args)
63 res = _input_check(*args)
64 if res:
---> 65 raise ValidationError('\n' + '\n'.join([' * ' + s for s in res]))
66
67
ValidationError:
* Not all rows (distributions) in topic_term_dists sum to 1.
Best answer
This happens because pyLDAvis expects every term in the model's vocabulary to show up in the corpus at least once. It can occur when you do some preprocessing after generating the corpus/texts but before training the model.
Words in the model's internal dictionary that are missing from the dictionary you supply cause the validation to fail, because the row probabilities then sum to slightly less than one.
You can fix this either by adding the missing words to the corpus dictionary (or by adding the words to the corpus and rebuilding the dictionary from it), or by adding the following line to site-packages\pyLDAvis\gensim.py just before the assertion `topic_term_dists.shape[0] == doc_topic_dists.shape[1]` (around line 67):
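One way to see whether this mismatch is the problem is to compare the model's vocabulary with the dictionary handed to pyLDAvis. A minimal sketch with made-up token sets (the names here are illustrative, not gensim internals):

```python
# Hypothetical vocabularies: in practice these would come from the
# trained model's id2word mapping and from the loaded dictionary.
model_vocab = {"cat", "dog", "fish", "bird"}   # words the LDA model knows
dictionary_vocab = {"cat", "dog", "fish"}      # words in the supplied dictionary

# Any word known to the model but absent from the dictionary loses its
# probability mass, so the corresponding topic rows sum to less than 1.
missing = model_vocab - dictionary_vocab
print(missing)  # -> {'bird'}
```

If `missing` is non-empty, that is exactly the situation that triggers the "Not all rows (distributions) in topic_term_dists sum to 1" error.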
topic_term_dists = topic_term_dists / topic_term_dists.sum(axis=1)[:, None]
Assuming your code runs up to that point, this renormalizes the topic distributions without the missing dictionary items. Note, though, that it would be better to include all terms in the corpus.
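To illustrate what that patched line does, here is the same renormalization applied to a toy NumPy matrix (the values are made up; the first row deliberately sums to less than 1, as happens when dictionary terms are missing):

```python
import numpy as np

# Toy topic-term matrix: 2 topics over 4 terms.
# Row 0 sums to 0.95, which would fail pyLDAvis validation.
topic_term_dists = np.array([
    [0.4, 0.3, 0.2, 0.05],
    [0.1, 0.2, 0.3, 0.40],
])

# The patch: divide each row by its own sum.
topic_term_dists = topic_term_dists / topic_term_dists.sum(axis=1)[:, None]

print(topic_term_dists.sum(axis=1))  # every row now sums to 1
```

This only rescales the distributions; it does not recover the probability mass of the missing words, which is why rebuilding the dictionary from the full corpus is the cleaner fix.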