Question
I'm new to PySpark and was playing around with tfidf. I just wanted to check whether PySpark and sklearn give the same results, but they don't. Here's what I did.
from pyspark.ml.feature import Tokenizer, CountVectorizer, IDF
from pyspark.ml.linalg import DenseVector, VectorUDT
from pyspark.sql.functions import udf

# create the PySpark dataframe
sentenceData = sqlContext.createDataFrame([
    (0.0, "Hi I heard about Spark"),
    (0.0, "I wish Java could use case classes"),
    (1.0, "Logistic regression models are neat")
]).toDF("label", "sentence")

# tokenize
tokenizer = Tokenizer().setInputCol("sentence").setOutputCol("words")
wordsData = tokenizer.transform(sentenceData)

# vectorize (raw term counts)
vectorizer = CountVectorizer(inputCol='words', outputCol='vectorizer').fit(wordsData)
wordsData = vectorizer.transform(wordsData)

# calculate tfidf scores
idf = IDF(inputCol="vectorizer", outputCol="tfidf_features")
idf_model = idf.fit(wordsData)
wordsData = idf_model.transform(wordsData)

# convert the sparse tfidf vectors to dense vectors
def to_dense(in_vec):
    return DenseVector(in_vec.toArray())

to_dense_udf = udf(lambda x: to_dense(x), VectorUDT())

# create the dense vector column
wordsData = wordsData.withColumn("tfidf_features_dense", to_dense_udf('tfidf_features'))
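To sanity-check the pipeline at this point (a quick optional inspection, not part of the comparison itself), you can print the learned vocabulary, whose order determines the vector indices and is reused as column labels below, and peek at one row of the output:

# vocabulary learned by CountVectorizer, in vector-index order
print(vectorizer.vocabulary)

# look at one row of the dense tfidf column
wordsData.select("tfidf_features_dense").show(1, truncate=False)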
I converted the PySpark df to pandas
wordsData_pandas = wordsData.toPandas()
and then computed the tfidf scores using sklearn as follows:
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# identity function, so TfidfVectorizer uses the already-tokenized words as-is
def dummy_fun(doc):
    return doc

# create sklearn tfidf
tfidf = TfidfVectorizer(
    analyzer='word',
    tokenizer=dummy_fun,
    preprocessor=dummy_fun,
    token_pattern=None)

# fit on the tokenized sentences and get the tfidf scores
feature_matrix = tfidf.fit_transform(wordsData_pandas.words)

# create the sklearn document-term matrix
sklearn_tfidf = pd.DataFrame(feature_matrix.toarray(), columns=tfidf.get_feature_names())

# create the PySpark document-term matrix
spark_tfidf = pd.DataFrame([np.array(i) for i in wordsData_pandas.tfidf_features_dense],
                           columns=vectorizer.vocabulary)
But unfortunately, this is what I'm getting for PySpark:
   i         are       logistic  case      spark     hi        about     neat      could     regression  wish      use       heard     classes   java      models
0  0.287682  0.000000  0.000000  0.000000  0.693147  0.693147  0.693147  0.000000  0.000000  0.000000    0.000000  0.000000  0.693147  0.000000  0.000000  0.000000
1  0.287682  0.000000  0.000000  0.693147  0.000000  0.000000  0.000000  0.000000  0.693147  0.000000    0.693147  0.693147  0.000000  0.693147  0.693147  0.000000
2  0.000000  0.693147  0.693147  0.000000  0.000000  0.000000  0.000000  0.693147  0.000000  0.693147    0.000000  0.000000  0.000000  0.000000  0.000000  0.693147
and this for sklearn:
   i         are       logistic  case      spark     hi        about     neat      could     regression  wish      use       heard     classes   java      models
0  0.355432  0.000000  0.000000  0.000000  0.467351  0.467351  0.467351  0.000000  0.000000  0.000000    0.000000  0.000000  0.467351  0.000000  0.000000  0.000000
1  0.296520  0.000000  0.000000  0.389888  0.000000  0.000000  0.000000  0.000000  0.389888  0.000000    0.389888  0.389888  0.000000  0.389888  0.389888  0.000000
2  0.000000  0.447214  0.447214  0.000000  0.000000  0.000000  0.000000  0.447214  0.000000  0.447214    0.000000  0.000000  0.000000  0.000000  0.000000  0.447214
I did try out the use_idf and smooth_idf parameters, but neither seems to make the two match. What am I missing? Any help is appreciated. Thanks in advance.
Answer
That's because the IDFs are calculated a little differently between the two.
From sklearn's documentation (with smooth_idf=True, the default):

idf(t) = ln((1 + n) / (1 + df(t))) + 1
versus pyspark's documentation:

IDF(t, D) = log((|D| + 1) / (DF(t, D) + 1))

where n (or |D|) is the number of documents and df(t) (or DF(t, D)) is the number of documents containing term t.
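Filling in the numbers from this corpus of 3 sentences: "spark" appears in 1 of the 3 documents, so pyspark computes ln((3 + 1) / (1 + 1)) = ln(2) = 0.693147, exactly the value in the pyspark table. sklearn instead computes ln(4 / 2) + 1 = 1.693147 and then l2-normalizes each row, which is how that cell becomes 0.467351 in the sklearn table.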
Besides adding 1 to the IDF, sklearn's TF-IDF also applies an l2 norm by default, which pyspark doesn't:
TfidfTransformer(norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)
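Given that, here is a minimal sketch of how to reconcile the two, assuming the wordsData_pandas, spark_tfidf, and sklearn_tfidf frames built in the question. Since sklearn's smoothed IDF is exactly pyspark's IDF plus 1, sklearn's unnormalized score is spark_tfidf plus the raw term counts, and l2-normalizing each row of that should reproduce sklearn's output:

import numpy as np
import pandas as pd

# raw term counts from the CountVectorizer column of the question's dataframe
tf = pd.DataFrame([np.array(v.toArray()) for v in wordsData_pandas.vectorizer],
                  columns=vectorizer.vocabulary)

# sklearn's smoothed idf = pyspark's idf + 1, so its unnormalized tfidf
# is pyspark's tfidf plus the raw counts
raw = spark_tfidf + tf

# apply sklearn's default l2 normalization per row
reconstructed = raw.div(np.sqrt((raw ** 2).sum(axis=1)), axis=0)

# max absolute difference should be ~0 (pandas aligns the columns by name)
print((reconstructed - sklearn_tfidf).abs().max().max())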