This post covers feature selection and prediction with scikit-learn's RFECV and k-fold cross-validation, based on a question and its recommended answer.

Problem description

from sklearn.feature_selection import RFECV
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_iris

I have X and Y data.

data = load_iris()    
X = data.data
Y = data.target 

I would like to implement RFECV feature selection and prediction with k-fold validation approach.

clf = RandomForestClassifier()

kf = KFold(n_splits=2, shuffle=True, random_state=0)  

estimators = [('standardize' , StandardScaler()),
              ('clf', clf)]

class Mypipeline(Pipeline):
    @property
    def coef_(self):
        return self._final_estimator.coef_
    @property
    def feature_importances_(self):
        return self._final_estimator.feature_importances_ 

pipeline = Mypipeline(estimators)

rfecv = RFECV(estimator=pipeline, cv=kf, scoring='accuracy', verbose=10)
rfecv_data = rfecv.fit(X, Y)

print ('no. of selected features =', rfecv_data.n_features_) 

Edit (for the remaining part):

X_new = rfecv.transform(X)
print(X_new.shape)

y_predicts = cross_val_predict(clf, X_new, Y, cv=kf)
accuracy = accuracy_score(Y, y_predicts)
print ('accuracy =', accuracy)

Recommended answer

Instead of wrapping StandardScaler and RFECV in the same pipeline, wrap StandardScaler and RandomForestClassifier together, and pass that pipeline to RFECV as the estimator. This way, no training information is leaked.

estimators = [('standardize' , StandardScaler()),
              ('clf', RandomForestClassifier())]

pipeline = Pipeline(estimators)


rfecv = RFECV(estimator=pipeline, scoring='accuracy')
rfecv_data = rfecv.fit(X, Y)

Update: regarding the error 'RuntimeError: The classifier does not expose "coef_" or "feature_importances_" attributes'

Yes, that's a known issue with the scikit-learn Pipeline. You can look at my other answer here for more details and use the new pipeline I created there.

Define a custom pipeline like this:

class Mypipeline(Pipeline):
    @property
    def coef_(self):
        return self._final_estimator.coef_
    @property
    def feature_importances_(self):
        return self._final_estimator.feature_importances_ 

And use it:

pipeline = Mypipeline(estimators)

rfecv = RFECV(estimator=pipeline, scoring='accuracy')
rfecv_data = rfecv.fit(X, Y)
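
Once the fit finishes, you can inspect which columns were kept directly on the fitted RFECV object. A minimal sketch, assuming the iris X and Y loaded in the question above:

# support_ is a boolean mask over the original feature columns;
# ranking_ assigns rank 1 to every selected feature.
print('selected mask =', rfecv.support_)
print('feature ranking =', rfecv.ranking_)
print('no. of selected features =', rfecv.n_features_)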

Update 2:

@brute, for your data and code, the algorithm completes within a minute on my PC. Here is the complete code I use:

import numpy as np
import glob
from sklearn.utils import resample
files = glob.glob('/home/Downloads/Untitled Folder/*') 
outs = [] 
for fi in files: 
    data = np.genfromtxt(fi, delimiter='|', dtype=float) 
    data = data[~np.isnan(data).any(axis=1)] 
    data = resample(data, replace=False, n_samples=1800, random_state=0) 
    outs.append(data) 

X = np.vstack(outs) 
print(X.shape)
Y = np.repeat([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 1800) 
print(Y.shape)

#from sklearn.utils import shuffle
#X, Y = shuffle(X, Y, random_state=0)

from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

clf = RandomForestClassifier()

kf = KFold(n_splits=10, shuffle=True, random_state=0)  

estimators = [('standardize' , StandardScaler()),
              ('clf', RandomForestClassifier())]

class Mypipeline(Pipeline):
    @property
    def coef_(self):
        return self._final_estimator.coef_
    @property
    def feature_importances_(self):
        return self._final_estimator.feature_importances_ 

pipeline = Mypipeline(estimators)

rfecv = RFECV(estimator=pipeline, scoring='accuracy', verbose=10)
rfecv_data = rfecv.fit(X, Y)

print ('no. of selected features =', rfecv_data.n_features_) 

Update 3: For cross_val_predict

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score

X_new = rfecv.transform(X)
print(X_new.shape)

# Change clf to pipeline here, because RFECV selected features
# based on scaled data, and that scaling is missing if you
# pass clf directly
y_predicts = cross_val_predict(pipeline, X_new, Y, cv=kf)
accuracy = accuracy_score(Y, y_predicts)
print ('accuracy =', accuracy)
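
If you only need the fold-wise accuracy rather than the individual predictions, cross_val_score works on the same objects. A minimal sketch, assuming the pipeline, X_new, Y, and kf defined above:

from sklearn.model_selection import cross_val_score

# Mean accuracy over the folds; scaling is refit inside each fold
# because the pipeline contains the StandardScaler step
scores = cross_val_score(pipeline, X_new, Y, cv=kf, scoring='accuracy')
print('mean cv accuracy =', scores.mean())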
