Problem Description
I have fitted a CountVectorizer to some documents in scikit-learn. I would like to see all the terms and their corresponding frequency in the text corpus, in order to select stop-words. For example
'and' 123 times, 'to' 100 times, 'for' 90 times, ... and so on
Is there a built-in function for this?
Recommended Answer
If cv is your CountVectorizer and X is the vectorized corpus, then
import numpy as np  # X.sum() returns a numpy matrix, hence the asarray/ravel below

zip(cv.get_feature_names(),
    np.asarray(X.sum(axis=0)).ravel())
returns a list of (term, frequency) pairs for each distinct term in the corpus that the CountVectorizer extracted. (On Python 3, zip returns an iterator, so wrap it in list() if you need an actual list.)
(The little asarray + ravel dance is needed to work around some quirks in scipy.sparse.)
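For reference, here is a minimal end-to-end sketch of the same approach on a made-up toy corpus (the documents and variable names are only for illustration; recent scikit-learn versions rename get_feature_names to get_feature_names_out):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus, purely for illustration.
docs = [
    "the cat sat on the mat",
    "the dog ate my homework",
    "the cat and the dog",
]

cv = CountVectorizer()
X = cv.fit_transform(docs)  # sparse document-term matrix

# Total count of each term across all documents.
freqs = np.asarray(X.sum(axis=0)).ravel()
terms = cv.get_feature_names_out()  # use cv.get_feature_names() on older versions

# Sort by descending frequency to inspect candidate stop-words.
for term, freq in sorted(zip(terms, freqs), key=lambda p: p[1], reverse=True):
    print(term, freq)

The highest-frequency terms printed first are the natural stop-word candidates.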