I have this code for computing text similarity with tf-idf.
from sklearn.feature_extraction.text import TfidfVectorizer
documents = [doc1, doc2]  # doc1, doc2 are plain strings
tfidf = TfidfVectorizer().fit_transform(documents)
# tf-idf rows are L2-normalized by default, so the dot product is cosine similarity
pairwise_similarity = tfidf * tfidf.T
print(pairwise_similarity.A)  # .A converts the sparse result to a dense array
The problem is that this code takes plain strings as input, and I want to prepare the documents by removing stop words, stemming, and tokenizing them first. Each input document would then be a list of tokens instead of a string. If I call
documents = [doc1, doc2]
with the tokenized documents, the error is: Traceback (most recent call last):
File "C:\Users\tasos\Desktop\my thesis\beta\similarity.py", line 18, in <module>
tfidf = TfidfVectorizer().fit_transform(documents)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 1219, in fit_transform
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 780, in fit_transform
vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 715, in _count_vocab
for feature in analyze(doc):
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 229, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 195, in <lambda>
return lambda x: strip_accents(x.lower())
AttributeError: 'list' object has no attribute 'lower'
Is there any way to change the code so that it accepts a list, or should I convert the tokenized documents back into strings?
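For context, here is a minimal sketch of the kind of preprocessing described above, using NLTK. The stop-word list, tokenizer, and stemmer choices are assumptions for illustration, not the asker's actual code, and doc1/doc2 stand for the raw strings from the snippet above.

# Hypothetical preprocessing sketch (assumes NLTK is installed and its
# 'punkt' and 'stopwords' data have been downloaded).
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(text):
    # Tokenize and lowercase, drop stop words and non-alphabetic tokens, then stem
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

# Each document is now a list of tokens rather than a plain string
documents = [preprocess(doc1), preprocess(doc2)]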
Best Answer
Try skipping the lowercase preprocessing and supplying your own "nop" tokenizer that passes each token list through unchanged:
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)
You should also check other parameters, such as stop_words, to avoid duplicating your preprocessing. Regarding "python - tfidf algorithm for python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18432289/
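Putting it together, here is a minimal end-to-end sketch of this fix. The token lists are made-up sample data standing in for the output of your own preprocessing:

from sklearn.feature_extraction.text import TfidfVectorizer

# Already-tokenized documents: one list of tokens per document
documents = [['cat', 'sat', 'mat'], ['dog', 'sat', 'log']]
# The identity tokenizer passes each token list through untouched;
# lowercase=False skips the string preprocessing that fails on lists
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)
pairwise_similarity = tfidf * tfidf.T
print(pairwise_similarity.A)

With two documents, pairwise_similarity.A is a 2x2 array whose off-diagonal entries are the cosine similarities between the documents, since tf-idf rows are L2-normalized by default.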