Problem Description
I have fitted a CountVectorizer to some documents in scikit-learn. I would like to see all the terms and their corresponding frequencies in the text corpus, in order to select stop-words. For example:

'and' 123 times, 'to' 100 times, 'for' 90 times, ... and so on
Is there a built-in function for this?
Answer
If cv is your CountVectorizer and X is the vectorized corpus, then
list(zip(cv.get_feature_names_out(), np.asarray(X.sum(axis=0)).ravel()))
returns a list of (term, frequency) pairs for each distinct term in the corpus that the CountVectorizer extracted. (This assumes numpy is imported as np; on scikit-learn versions older than 1.0, the method is named get_feature_names instead of get_feature_names_out.)
(The little asarray + ravel dance is needed to work around some quirks in scipy.sparse.)