Question
I have fitted a CountVectorizer to some documents in scikit-learn. I would like to see all the terms and their corresponding frequencies in the text corpus, in order to select stop-words. For example:
'and' 123 times, 'to' 100 times, 'for' 90 times, ... and so on
Is there any built-in function for this?
Recommended answer
If cv is your CountVectorizer and X is the vectorized corpus, then
import numpy as np

list(zip(cv.get_feature_names(), np.asarray(X.sum(axis=0)).ravel()))
returns a list of (term, frequency) pairs for each distinct term in the corpus that the CountVectorizer extracted.
(The little asarray + ravel dance is needed to work around some quirks in scipy.sparse.)