I am using scikit-learn with Pipeline and FeatureUnion to extract features from different inputs. Each sample (instance) in my dataset refers to a document of a different length. My goal is to compute the top tf-idf of each document independently, but I keep getting this error message:



The size of the training data is 2000.
Here is the main code:

book_summary= Pipeline([
   ('selector', ItemSelector(key='book')),
   ('tfidf', TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=1, lowercase=True, stop_words=my_stopword_list, sublinear_tf=True))
])

book_contents= Pipeline([('selector3', book_content_count())])

ppl = Pipeline([
    ('feats', FeatureUnion([
         ('book_summary', book_summary),
         ('book_contents', book_contents)])),
    ('clf', SVC(kernel='linear', class_weight='balanced') ) # classifier with cross fold 5
])

I wrote two classes to handle each pipeline branch. My problem is with the book_contents pipeline, which processes each sample and returns a tf-idf matrix for each book separately.
class book_content_count():
    def count_contents2(self, bookid):
        book = open('C:/TheCorpus/' + str(int(bookid)) + '_book.csv', 'r')
        book_data = pd.read_csv(book, header=0, delimiter=',', encoding='latin1', error_bad_lines=False, dtype=str)
        corpus = str([book_data['text']]).strip('[]')
        return corpus

    def transform(self, data_dict, y=None):
        # from the book ids, load each book's text, then vectorize it
        text = data_dict['bookid'].apply(self.count_contents2)
        vec_pipe = Pipeline([('vec', TfidfVectorizer(min_df=1, lowercase=False, ngram_range=(1, 1), use_idf=True, stop_words='english'))])
        Xtr = vec_pipe.fit_transform(text)
        return Xtr

    def fit(self, x, y=None):
        return self
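One likely source of mismatched dimensions in a class like the one above is that the vectorizer is created and re-fitted inside every transform call, so each call can yield a different vocabulary size. A minimal sketch of the usual scikit-learn pattern, fitting the vectorizer once in fit and reusing it in transform (the class name and the `_load_text` placeholder are hypothetical, standing in for the file-reading logic above):

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer

class BookContentTfidf(BaseEstimator, TransformerMixin):
    """Fit the vectorizer once in fit(), so every transform()
    returns a matrix with the same number of columns."""

    def _load_text(self, bookid):
        # Placeholder for reading the book's text from disk,
        # as count_contents2 does in the original class.
        return str(bookid)

    def fit(self, data_dict, y=None):
        texts = data_dict['bookid'].apply(self._load_text)
        self.vec_ = TfidfVectorizer(min_df=1, stop_words='english')
        self.vec_.fit(texts)
        return self

    def transform(self, data_dict):
        texts = data_dict['bookid'].apply(self._load_text)
        # transform (not fit_transform): reuses the fitted vocabulary
        return self.vec_.transform(texts)
```

This keeps the feature width constant between training and prediction, which FeatureUnion requires.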

Data sample (example):
title                         Summary                          bookid
The beauty and the beast      is a traditional fairy tale...    10
ocean at the end of the lane  is a 2013 novel by British        11

Each ID then refers to a text file containing the actual contents of that book.

I have already tried the toarray and reshape functions, but with no luck. Any ideas how to solve this problem?
Thanks

Best Answer

You can use Neuraxle's FeatureUnion with a custom joiner that you would need to write yourself. The joiner is a class passed to Neuraxle's FeatureUnion that merges the results together in the way you expect.

1. Import Neuraxle's classes:

from neuraxle.base import NonFittableMixin, BaseStep
from neuraxle.pipeline import Pipeline
from neuraxle.steps.sklearn import SKLearnWrapper
from neuraxle.union import FeatureUnion

2. Define your custom class by inheriting from BaseStep:
class BookContentCount(BaseStep):

    def transform(self, data_dict, y=None):
        transformed = do_things(...)  # be sure to use SKLearnWrapper if you wrap sklearn items.
        return transformed

    def fit(self, x, y=None):
        return self

3. Create a joiner that joins the results of the feature union the way you want:
class CustomJoiner(NonFittableMixin, BaseStep):
    def __init__(self):
        BaseStep.__init__(self)
        NonFittableMixin.__init__(self)

    # def fit: is inherited from `NonFittableMixin` and simply returns self.

    def transform(self, data_inputs):
        # TODO: insert your own concatenation method here.
        result = np.concatenate(data_inputs, axis=-1)
        return result
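As a sanity check on the concatenation above: the joiner receives one array per branch, and concatenating along the last axis works as long as the sample (row) counts match, even when the feature widths differ. A tiny sketch with made-up shapes:

```python
import numpy as np

# Two branch outputs: same 3 samples, different feature widths.
a = np.ones((3, 5))   # e.g. 3 samples x 5 summary features
b = np.zeros((3, 2))  # e.g. 3 samples x 2 content features

result = np.concatenate([a, b], axis=-1)
print(result.shape)  # (3, 7)
```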

4. Finally, create the pipeline by passing the joiner to FeatureUnion:
book_summary= Pipeline([
    ItemSelector(key='book'),
    TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=1, lowercase=True, stop_words=my_stopword_list, sublinear_tf=True)
])

p = Pipeline([
    FeatureUnion([
        book_summary,
        BookContentCount()
    ],
        joiner=CustomJoiner()
    ),
    SVC(kernel='linear', class_weight='balanced')
])

Note: if you would like your Neuraxle pipeline to become a scikit-learn pipeline again, you can do p = p.tosklearn().

To learn more about Neuraxle:
https://github.com/Neuraxio/Neuraxle

More examples in the documentation:
https://www.neuraxle.org/stable/examples/index.html

Regarding python-3.x - How to combine features with different dimensional outputs using scikit-learn, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50434661/
