Problem description
I'm trying to get the perplexity and log likelihood of a Spark LDA model (with Spark 2.1). Although I can save the model, the code below does not work: the methods logLikelihood and logPerplexity are not found.
from pyspark.mllib.clustering import LDA
from pyspark.mllib.linalg import Vectors
# construct corpus
# run LDA
ldaModel = LDA.train(corpus, k=10, maxIterations=10)
logll = ldaModel.logLikelihood(corpus)       # fails: method not found
perplexity = ldaModel.logPerplexity(corpus)  # fails: method not found
Notice that such methods do not show up with dir(LDA) either.
What would be a working example?
Recommended answer
That's because you are working with the old, RDD-based API (MLlib), i.e.
from pyspark.mllib.clustering import LDA # WRONG import
whose LDA class indeed does not include fit, logLikelihood, or logPerplexity methods.
In order to work with these methods, you should switch to the new, dataframe-based API (ML):
from pyspark.ml.clustering import LDA # NOTE: different import
# Loads data.
dataset = (spark.read.format("libsvm")
.load("data/mllib/sample_lda_libsvm_data.txt"))
# Trains an LDA model.
lda = LDA(k=10, maxIter=10)
model = lda.fit(dataset)
ll = model.logLikelihood(dataset)
lp = model.logPerplexity(dataset)
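As a point of interpretation: in Spark's implementation, the value reported by logPerplexity is an upper bound on the per-token negative log likelihood, i.e. roughly -logLikelihood divided by the total token count of the corpus (lower is better). A minimal plain-Python sketch of that relationship, with made-up numbers for illustration:

```python
def log_perplexity(log_likelihood: float, total_tokens: int) -> float:
    """Per-token negative log likelihood, the quantity Spark's
    logPerplexity bounds. Lower values indicate a better fit."""
    return -log_likelihood / total_tokens

# Illustrative (hypothetical) numbers: a 1,000-token corpus whose
# variational log-likelihood bound came out as -6907.75.
print(log_perplexity(-6907.75, 1000))  # → 6.90775
```

This is why both quantities move together: improving the model's logLikelihood on a dataset lowers its logPerplexity on the same dataset proportionally.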