This article explains how to obtain the most informative features for each class from a scikit-learn classifier.

Problem Description

The NLTK package provides a method show_most_informative_features() to find the most informative features for both classes, with output like:

   contains(outstanding) = True              pos : neg    =     11.1 : 1.0
        contains(seagal) = True              neg : pos    =      7.7 : 1.0
   contains(wonderfully) = True              pos : neg    =      6.8 : 1.0
         contains(damon) = True              pos : neg    =      5.9 : 1.0
        contains(wasted) = True              neg : pos    =      5.8 : 1.0

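For context, here is a minimal sketch of how such output is typically produced with NLTK; this follows the standard NLTK book recipe on the movie_reviews corpus (an assumption for illustration, not code from the original question), and it requires the corpus to be downloaded first via nltk.download('movie_reviews'):

import nltk
from nltk.corpus import movie_reviews

# One (word-list, label) pair per review in the corpus.
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

# Use the 2000 most frequent corpus words as the feature vocabulary.
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = [w for w, _ in all_words.most_common(2000)]

def document_features(document):
    words = set(document)
    return {'contains(%s)' % w: (w in words) for w in word_features}

featuresets = [(document_features(d), c) for (d, c) in documents]
classifier = nltk.NaiveBayesClassifier.train(featuresets)
classifier.show_most_informative_features(5)
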
As answered in the question How to get most informative features for scikit-learn classifiers?, this can also be done in scikit-learn. However, for a binary classifier, the answer in that question only outputs the strongest features themselves, without saying which class each one belongs to.
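For reference, the helper from that linked answer looks roughly like this (a paraphrase from memory, not a verbatim quote); it prints the two tails of the sorted coefficients side by side but never names the class each tail belongs to:

def show_most_informative_features(vectorizer, clf, n=20):
    # Binary case: clf.coef_ has a single row, so class membership stays implicit.
    feature_names = vectorizer.get_feature_names()
    coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
    top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
    for (coef_1, fn_1), (coef_2, fn_2) in top:
        print('%.4f\t%-15s\t\t%.4f\t%-15s' % (coef_1, fn_1, coef_2, fn_2))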

So my question is: how can I identify the class associated with each feature, as in the example above (outstanding is most informative for the pos class, and seagal is most informative for the neg class)?

What I actually want is a list of the most informative words for each class. How can I do that? Thanks!

Recommended Answer

In the case of binary classification, the coefficient array seems to have been flattened.

Let's try relabeling our data with only two labels:

import codecs

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainfile = 'train.txt'

# Vectorizing data: each line of train.txt is one document.
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile, 'r', 'utf8'))
tags = ['bs', 'pt', 'bs', 'pt']  # one label per document

# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)

print(mnb.classes_)
print(mnb.coef_[0])
print(mnb.coef_[1])  # this is the line that fails, see the traceback below

[Output]:

['bs' 'pt']
[-5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806
 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088
 -4.1705337  -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806
 -5.55682806 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806
 -5.55682806 -5.55682806 -5.55682806 -4.45821577 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806
 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.45821577 -4.86368088
 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -5.55682806
 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -5.55682806
 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806
 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -4.86368088 -4.1705337  -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806
 -4.86368088 -4.45821577 -4.86368088 -4.86368088]
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    print(mnb.coef_[1])
IndexError: index 1 is out of bounds for axis 0 with size 1
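
A quick sanity check on the shapes makes the problem visible (my addition, not part of the original answer):

print(mnb.coef_.shape)           # (1, n_features): a single row in the binary case
print(mnb.feature_count_.shape)  # (2, n_features): counts are still kept per class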

Let's do some diagnostics:

print(mnb.feature_count_)
print(mnb.coef_[0])

[Output]:

[[ 1.  0.  0.  1.  1.  1.  0.  0.  1.  1.  0.  0.  0.  1.  0.  1.  0.  1.
   1.  1.  2.  2.  0.  0.  0.  1.  1.  0.  1.  0.  0.  0.  0.  0.  2.  1.
   1.  1.  1.  0.  0.  0.  0.  0.  0.  1.  1.  0.  0.  0.  0.  1.  0.  0.
   0.  1.  1.  1.  1.  1.  1.  1.  1.  0.  0.  0.  0.  1.  1.  0.  1.  0.
   1.  2.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  1.  1.  0.  1.  1.
   0.  1.  0.  0.  0.  1.  1.  1.  0.  0.  1.  0.  1.  0.  1.  0.  1.  1.
   1.  0.  0.  1.  0.  0.  0.  4.  0.  0.  1.  0.  0.  0.  0.  0.  1.  0.
   0.  0.  1.  0.  0.  0.  0.  0.  0.  1.  0.  0.  1.  1.  0.  0.  0.  0.
   0.  0.  1.  0.  0.  1.  0.  0.  0.  0.]
 [ 0.  1.  1.  0.  0.  0.  1.  1.  0.  0.  1.  1.  3.  0.  1.  0.  1.  0.
   0.  0.  1.  2.  1.  1.  1.  1.  0.  1.  0.  1.  1.  1.  1.  1.  0.  0.
   0.  0.  0.  2.  1.  1.  1.  1.  1.  0.  0.  1.  1.  1.  1.  0.  1.  1.
   1.  0.  0.  0.  0.  0.  0.  0.  0.  1.  1.  1.  1.  0.  0.  1.  0.  1.
   0.  0.  1.  1.  2.  1.  1.  2.  1.  1.  1.  0.  1.  0.  0.  1.  0.  0.
   1.  0.  1.  1.  1.  0.  0.  0.  1.  1.  0.  1.  0.  1.  0.  1.  0.  0.
   0.  1.  1.  0.  1.  1.  1.  3.  1.  1.  0.  1.  1.  1.  1.  1.  0.  1.
   1.  1.  0.  1.  1.  1.  1.  1.  1.  0.  1.  1.  0.  0.  1.  1.  1.  1.
   1.  1.  0.  1.  1.  0.  1.  2.  1.  1.]]
[-5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806
 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088
 -4.1705337  -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806
 -5.55682806 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806
 -5.55682806 -5.55682806 -5.55682806 -4.45821577 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806
 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.45821577 -4.86368088
 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -5.55682806
 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -5.55682806
 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806
 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -4.86368088 -4.1705337  -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088
 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088
 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806
 -4.86368088 -4.45821577 -4.86368088 -4.86368088]

So the features are counted per class, but the coefficient array has been flattened into a single row for the binary case (presumably to save memory). Let's pair each feature with its coefficient and its per-class counts:

# Pair each feature with its coefficient and its raw count in each class.
coef_features_c1_c2 = []

for index, (feat, c1, c2) in enumerate(zip(word_vectorizer.get_feature_names(),
                                           mnb.feature_count_[0],
                                           mnb.feature_count_[1])):
    coef_features_c1_c2.append((mnb.coef_[0][index], feat, c1, c2))

for i in sorted(coef_features_c1_c2):
    print(i)

[Output]:

(-5.5568280616995374, u'acuerdo', 1.0, 0.0)
(-5.5568280616995374, u'al', 1.0, 0.0)
(-5.5568280616995374, u'alex', 1.0, 0.0)
(-5.5568280616995374, u'algo', 1.0, 0.0)
(-5.5568280616995374, u'andaba', 1.0, 0.0)
(-5.5568280616995374, u'andrea', 1.0, 0.0)
(-5.5568280616995374, u'bien', 1.0, 0.0)
(-5.5568280616995374, u'buscando', 1.0, 0.0)
(-5.5568280616995374, u'como', 1.0, 0.0)
(-5.5568280616995374, u'con', 1.0, 0.0)
(-5.5568280616995374, u'conseguido', 1.0, 0.0)
(-5.5568280616995374, u'distancia', 1.0, 0.0)
(-5.5568280616995374, u'doprinese', 1.0, 0.0)
(-5.5568280616995374, u'es', 2.0, 0.0)
(-5.5568280616995374, u'est\xe1', 1.0, 0.0)
(-5.5568280616995374, u'eulex', 1.0, 0.0)
(-5.5568280616995374, u'excusa', 1.0, 0.0)
(-5.5568280616995374, u'fama', 1.0, 0.0)
(-5.5568280616995374, u'guasch', 1.0, 0.0)
(-5.5568280616995374, u'ha', 1.0, 0.0)
(-5.5568280616995374, u'incident', 1.0, 0.0)
(-5.5568280616995374, u'ispit', 1.0, 0.0)
(-5.5568280616995374, u'istragu', 1.0, 0.0)
(-5.5568280616995374, u'izbijanju', 1.0, 0.0)
(-5.5568280616995374, u'ja\u010danju', 1.0, 0.0)
(-5.5568280616995374, u'je', 1.0, 0.0)
(-5.5568280616995374, u'jedan', 1.0, 0.0)
(-5.5568280616995374, u'jo\u0161', 1.0, 0.0)
(-5.5568280616995374, u'kapaciteta', 1.0, 0.0)
(-5.5568280616995374, u'kosova', 1.0, 0.0)
(-5.5568280616995374, u'la', 1.0, 0.0)
(-5.5568280616995374, u'lequio', 1.0, 0.0)
(-5.5568280616995374, u'llevar', 1.0, 0.0)
(-5.5568280616995374, u'lo', 2.0, 0.0)
(-5.5568280616995374, u'misije', 1.0, 0.0)
(-5.5568280616995374, u'muy', 1.0, 0.0)
(-5.5568280616995374, u'm\xe1s', 1.0, 0.0)
(-5.5568280616995374, u'na', 1.0, 0.0)
(-5.5568280616995374, u'nada', 1.0, 0.0)
(-5.5568280616995374, u'nasilja', 1.0, 0.0)
(-5.5568280616995374, u'no', 1.0, 0.0)
(-5.5568280616995374, u'obaviti', 1.0, 0.0)
(-5.5568280616995374, u'obe\u0107ao', 1.0, 0.0)
(-5.5568280616995374, u'parecer', 1.0, 0.0)
(-5.5568280616995374, u'pone', 1.0, 0.0)
(-5.5568280616995374, u'por', 1.0, 0.0)
(-5.5568280616995374, u'po\u0161to', 1.0, 0.0)
(-5.5568280616995374, u'prava', 1.0, 0.0)
(-5.5568280616995374, u'predstavlja', 1.0, 0.0)
(-5.5568280616995374, u'pro\u0161losedmi\u010dnom', 1.0, 0.0)
(-5.5568280616995374, u'relaci\xf3n', 1.0, 0.0)
(-5.5568280616995374, u'sjeveru', 1.0, 0.0)
(-5.5568280616995374, u'taj', 1.0, 0.0)
(-5.5568280616995374, u'una', 1.0, 0.0)
(-5.5568280616995374, u'visto', 1.0, 0.0)
(-5.5568280616995374, u'vladavine', 1.0, 0.0)
(-5.5568280616995374, u'ya', 1.0, 0.0)
(-5.5568280616995374, u'\u0107e', 1.0, 0.0)
(-4.863680881139592, u'aj', 0.0, 1.0)
(-4.863680881139592, u'ajudou', 0.0, 1.0)
(-4.863680881139592, u'alpsk\xfdmi', 0.0, 1.0)
(-4.863680881139592, u'alpy', 0.0, 1.0)
(-4.863680881139592, u'ao', 0.0, 1.0)
(-4.863680881139592, u'apresenta', 0.0, 1.0)
(-4.863680881139592, u'bl\xedzko', 0.0, 1.0)
(-4.863680881139592, u'come\xe7o', 0.0, 1.0)
(-4.863680881139592, u'da', 2.0, 1.0)
(-4.863680881139592, u'decepcionantes', 0.0, 1.0)
(-4.863680881139592, u'deti', 0.0, 1.0)
(-4.863680881139592, u'dificuldades', 0.0, 1.0)
(-4.863680881139592, u'dif\xedcil', 1.0, 1.0)
(-4.863680881139592, u'do', 0.0, 1.0)
(-4.863680881139592, u'druh', 0.0, 1.0)
(-4.863680881139592, u'd\xe1', 0.0, 1.0)
(-4.863680881139592, u'ela', 0.0, 1.0)
(-4.863680881139592, u'encontrar', 0.0, 1.0)
(-4.863680881139592, u'enfrentar', 0.0, 1.0)
(-4.863680881139592, u'for\xe7as', 0.0, 1.0)
(-4.863680881139592, u'furiosa', 0.0, 1.0)
(-4.863680881139592, u'golf', 0.0, 1.0)
(-4.863680881139592, u'golfistami', 0.0, 1.0)
(-4.863680881139592, u'golfov\xfdch', 0.0, 1.0)
(-4.863680881139592, u'hotelmi', 0.0, 1.0)
(-4.863680881139592, u'hra\u0165', 0.0, 1.0)
(-4.863680881139592, u'ide', 0.0, 1.0)
(-4.863680881139592, u'ihr\xedsk', 0.0, 1.0)
(-4.863680881139592, u'intranspon\xedveis', 0.0, 1.0)
(-4.863680881139592, u'in\xedcio', 0.0, 1.0)
(-4.863680881139592, u'in\xfd', 0.0, 1.0)
(-4.863680881139592, u'kde', 0.0, 1.0)
(-4.863680881139592, u'kombin\xe1cie', 0.0, 1.0)
(-4.863680881139592, u'komplex', 0.0, 1.0)
(-4.863680881139592, u'kon\u010diarmi', 0.0, 1.0)
(-4.863680881139592, u'lado', 0.0, 1.0)
(-4.863680881139592, u'lete', 0.0, 1.0)
(-4.863680881139592, u'longo', 0.0, 1.0)
(-4.863680881139592, u'ly\u017eova\u0165', 0.0, 1.0)
(-4.863680881139592, u'man\u017eelky', 0.0, 1.0)
(-4.863680881139592, u'mas', 0.0, 1.0)
(-4.863680881139592, u'mesmo', 0.0, 1.0)
(-4.863680881139592, u'meu', 0.0, 1.0)
(-4.863680881139592, u'minha', 0.0, 1.0)
(-4.863680881139592, u'mo\u017enos\u0165ami', 0.0, 1.0)
(-4.863680881139592, u'm\xe3e', 0.0, 1.0)
(-4.863680881139592, u'nad\u0161en\xfdmi', 0.0, 1.0)
(-4.863680881139592, u'negativas', 0.0, 1.0)
(-4.863680881139592, u'nie', 0.0, 1.0)
(-4.863680881139592, u'nieko\u013ek\xfdch', 0.0, 1.0)
(-4.863680881139592, u'para', 0.0, 1.0)
(-4.863680881139592, u'parecem', 0.0, 1.0)
(-4.863680881139592, u'pod', 0.0, 1.0)
(-4.863680881139592, u'pon\xfakaj\xfa', 0.0, 1.0)
(-4.863680881139592, u'potrebuj\xfa', 0.0, 1.0)
(-4.863680881139592, u'pri', 0.0, 1.0)
(-4.863680881139592, u'prova\xe7\xf5es', 0.0, 1.0)
(-4.863680881139592, u'punham', 0.0, 1.0)
(-4.863680881139592, u'qual', 0.0, 1.0)
(-4.863680881139592, u'qualquer', 0.0, 1.0)
(-4.863680881139592, u'quem', 0.0, 1.0)
(-4.863680881139592, u'rak\xfaske', 0.0, 1.0)
(-4.863680881139592, u'rezortov', 0.0, 1.0)
(-4.863680881139592, u'sa', 0.0, 1.0)
(-4.863680881139592, u'sebe', 0.0, 1.0)
(-4.863680881139592, u'sempre', 0.0, 1.0)
(-4.863680881139592, u'situa\xe7\xf5es', 0.0, 1.0)
(-4.863680881139592, u'spojen\xfdch', 0.0, 1.0)
(-4.863680881139592, u'suplantar', 0.0, 1.0)
(-4.863680881139592, u's\xfa', 0.0, 1.0)
(-4.863680881139592, u'tak', 0.0, 1.0)
(-4.863680881139592, u'talianske', 0.0, 1.0)
(-4.863680881139592, u'teve', 0.0, 1.0)
(-4.863680881139592, u'tive', 0.0, 1.0)
(-4.863680881139592, u'todas', 0.0, 1.0)
(-4.863680881139592, u'tr\xe1venia', 0.0, 1.0)
(-4.863680881139592, u've\u013ek\xfd', 0.0, 1.0)
(-4.863680881139592, u'vida', 0.0, 1.0)
(-4.863680881139592, u'vo', 0.0, 1.0)
(-4.863680881139592, u'vo\u013en\xe9ho', 0.0, 1.0)
(-4.863680881139592, u'vysok\xfdmi', 0.0, 1.0)
(-4.863680881139592, u'vy\u017eitia', 0.0, 1.0)
(-4.863680881139592, u'v\xe4\u010d\u0161ine', 0.0, 1.0)
(-4.863680881139592, u'v\u017edy', 0.0, 1.0)
(-4.863680881139592, u'zauj\xedmav\xe9', 0.0, 1.0)
(-4.863680881139592, u'zime', 0.0, 1.0)
(-4.863680881139592, u'\u010dasu', 0.0, 1.0)
(-4.863680881139592, u'\u010fal\u0161\xedmi', 0.0, 1.0)
(-4.863680881139592, u'\u0161vaj\u010diarske', 0.0, 1.0)
(-4.4582157730314274, u'de', 2.0, 2.0)
(-4.4582157730314274, u'foi', 0.0, 2.0)
(-4.4582157730314274, u'mais', 0.0, 2.0)
(-4.4582157730314274, u'me', 0.0, 2.0)
(-4.4582157730314274, u'\u010di', 0.0, 2.0)
(-4.1705337005796466, u'as', 0.0, 3.0)
(-4.1705337005796466, u'que', 4.0, 3.0)

Now we see a pattern: the low-coefficient tail of the sorted list favors one class and the high tail favors the other. So you can simply do this:

import codecs

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainfile = 'train.txt'

# Vectorizing data: each line of train.txt is one document.
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile, 'r', 'utf8'))
tags = ['bs', 'pt', 'bs', 'pt']

# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)

def most_informative_feature_for_binary_classification(vectorizer, classifier, n=10):
    class_labels = classifier.classes_
    feature_names = vectorizer.get_feature_names()
    # Sort once; the low tail belongs to the first class, the high tail to the second.
    coefs_with_features = sorted(zip(classifier.coef_[0], feature_names))
    topn_class1 = coefs_with_features[:n]
    topn_class2 = coefs_with_features[-n:]

    for coef, feat in topn_class1:
        print(class_labels[0], coef, feat)

    print()

    for coef, feat in reversed(topn_class2):
        print(class_labels[1], coef, feat)


most_informative_feature_for_binary_classification(word_vectorizer, mnb)

[Output]:

bs -5.5568280617 acuerdo
bs -5.5568280617 al
bs -5.5568280617 alex
bs -5.5568280617 algo
bs -5.5568280617 andaba
bs -5.5568280617 andrea
bs -5.5568280617 bien
bs -5.5568280617 buscando
bs -5.5568280617 como
bs -5.5568280617 con

pt -4.17053370058 que
pt -4.17053370058 as
pt -4.45821577303 či
pt -4.45821577303 me
pt -4.45821577303 mais
pt -4.45821577303 foi
pt -4.45821577303 de
pt -4.86368088114 švajčiarske
pt -4.86368088114 ďalšími
pt -4.86368088114 času

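A final caveat: newer scikit-learn releases changed the relevant API, since MultinomialNB.coef_ was removed in 1.0 and CountVectorizer.get_feature_names() was removed in 1.2. Here is a sketch of the same helper under those assumptions; for a binary MultinomialNB, the old coef_[0] corresponded to feature_log_prob_[1]:

import numpy as np

def most_informative_feature_for_binary_classification(vectorizer, classifier, n=10):
    # feature_log_prob_ has shape (n_classes, n_features); row 1 plays the
    # role of the old flattened coef_[0] for a binary MultinomialNB.
    feature_names = vectorizer.get_feature_names_out()
    coef = classifier.feature_log_prob_[1]
    order = np.argsort(coef)

    for j in order[:n]:        # lowest values -> first class
        print(classifier.classes_[0], coef[j], feature_names[j])
    print()
    for j in order[::-1][:n]:  # highest values -> second class
        print(classifier.classes_[1], coef[j], feature_names[j])
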
In fact, if you read @larsmans' comment carefully, he already gives this hint about the binary classes' coefficients in How to get most informative features for scikit-learn classifiers?
