I am following the WildML blog post on text classification with TensorFlow. I cannot understand the purpose of max_document_length in this code statement:

vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)


Also, how can I extract the vocabulary from vocab_processor?

Best Answer

I have figured out how to extract the vocabulary from the VocabularyProcessor object. This worked perfectly for me.

import numpy as np
from tensorflow.contrib import learn

x_text = ['This is a cat','This must be boy', 'This is a a dog']
max_document_length = max([len(x.split(" ")) for x in x_text])

## Create the VocabularyProcessor object, setting the max length of the documents.
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)

## Transform the documents using the vocabulary.
x = np.array(list(vocab_processor.fit_transform(x_text)))

## Extract the word:id mapping from the object.
## Note: vocabulary_._mapping is a private attribute, so this may break in other versions.
vocab_dict = vocab_processor.vocabulary_._mapping

## Sort the vocabulary dictionary on the basis of values (ids).
## Both statements perform the same task.
#sorted_vocab = sorted(vocab_dict.items(), key=operator.itemgetter(1))  # requires `import operator`
sorted_vocab = sorted(vocab_dict.items(), key=lambda x: x[1])

## Treat the ids as indices into a list and create a list of words in ascending order of id;
## the word with id i goes at index i of the list.
vocabulary = list(list(zip(*sorted_vocab))[0])

print(vocabulary)
print(x)
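To answer the first part of the question: max_document_length fixes the width of the output id matrix. Every document is mapped to exactly that many ids, truncated if it has more tokens and zero-padded if it has fewer. Below is a minimal pure-Python sketch of that behavior (no TensorFlow needed); the helper names `build_vocab` and `transform` are illustrative, not the real VocabularyProcessor API.

```python
def build_vocab(docs):
    """Assign ids starting at 1; 0 is reserved for padding."""
    vocab = {}
    for doc in docs:
        for word in doc.split(" "):
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab

def transform(docs, vocab, max_document_length):
    """Map each document to a fixed-length list of word ids."""
    rows = []
    for doc in docs:
        # Truncate documents longer than max_document_length...
        ids = [vocab.get(w, 0) for w in doc.split(" ")][:max_document_length]
        # ...and pad shorter ones with 0 so every row has the same width.
        ids += [0] * (max_document_length - len(ids))
        rows.append(ids)
    return rows

docs = ['This is a cat', 'This must be boy', 'This is a a dog']
vocab = build_vocab(docs)
x = transform(docs, vocab, max_document_length=4)
# 'This is a a dog' has 5 tokens and is truncated to 4 ids;
# every row ends up exactly 4 ids wide.
```

This is why the original code sets max_document_length to the length of the longest document: nothing is truncated, and shorter documents are simply padded.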

Regarding "tensorflow - TensorFlow vocabulary processor", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40661684/
