I use sklearn.feature_extraction.text.CountVectorizer to compute n-grams. Example:
import sklearn.feature_extraction.text  # FYI http://scikit-learn.org/stable/install.html
ngram_size = 4
string = ["I really like python, it's pretty awesome."]
vect = sklearn.feature_extraction.text.CountVectorizer(ngram_range=(ngram_size, ngram_size))
vect.fit(string)
# On scikit-learn >= 1.2, use vect.get_feature_names_out() instead
print('{1}-grams: {0}'.format(vect.get_feature_names(), ngram_size))
Output:
4-grams: [u'like python it pretty', u'python it pretty awesome', u'really like python it']
The punctuation has been removed: how can I include it as separate tokens?
Best Answer
You should pass the tokenizer parameter when creating the sklearn.feature_extraction.text.CountVectorizer instance, specifying a tokenizer that treats punctuation as separate tokens. For example, nltk.tokenize.TreebankWordTokenizer treats most punctuation characters as separate tokens:
import sklearn.feature_extraction.text
from nltk.tokenize import TreebankWordTokenizer
ngram_size = 4
string = ["I really like python, it's pretty awesome."]
vect = sklearn.feature_extraction.text.CountVectorizer(ngram_range=(ngram_size, ngram_size),
                                                       tokenizer=TreebankWordTokenizer().tokenize)
vect.fit(string)  # fit first, so the vocabulary exists before reading the feature names
print('{1}-grams: {0}'.format(vect.get_feature_names(), ngram_size))
Output:
4-grams: [u"'s pretty awesome .", u", it 's pretty", u'i really like python',
u"it 's pretty awesome", u'like python , it', u"python , it 's",
u'really like python ,']
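If you would rather avoid the NLTK dependency, a plain regex passed as token_pattern can achieve a similar effect. This is a sketch, not part of the original answer; it assumes the pattern \w+|[^\w\s], which matches either a run of word characters or a single punctuation character:
import sklearn.feature_extraction.text
ngram_size = 4
string = ["I really like python, it's pretty awesome."]
# \w+ matches a run of word characters; [^\w\s] matches one punctuation character.
# token_pattern is only honored when no custom tokenizer is passed.
vect = sklearn.feature_extraction.text.CountVectorizer(ngram_range=(ngram_size, ngram_size),
                                                       token_pattern=r"\w+|[^\w\s]")
vect.fit(string)
print('{1}-grams: {0}'.format(vect.get_feature_names(), ngram_size))
Note that this pattern splits the apostrophe out on its own, so "it's" becomes the three tokens it, ', s, whereas TreebankWordTokenizer keeps the clitic together as 's.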
Regarding "python - How to get n-grams that include any punctuation as separate tokens using sklearn's CountVectorizer?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32128802/