According to the scikit-learn RFE documentation, the algorithm selects successively smaller sets of features, keeping only the highest-weighted features at each step. The lowest-weighted features are dropped, and the process is repeated until the number of remaining features matches the number specified by the user (or, by default, half the original number of features).

The RFECV docs say that features are ranked with RFE and k-fold cross-validation (KFCV).

We have a set of 25 features in the code from the documentation example for RFECV:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold  # sklearn.cross_validation was removed in 0.20
from sklearn.feature_selection import RFECV, RFE
from sklearn.datasets import make_classification

# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
                           n_redundant=2, n_repeated=0, n_classes=8,
                           n_clusters_per_class=1, random_state=0)

# Create the RFE object and compute a cross-validated score.
svc = SVC(kernel="linear")
# The "accuracy" scoring is proportional to the number of correct
# classifications
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2), scoring='accuracy')
rfecv.fit(X, y)
rfe = RFE(estimator=svc, step=1)
rfe.fit(X, y)

print('Original number of features is %s' % X.shape[1])
print("RFE final number of features : %d" % rfe.n_features_)
print("RFECV final number of features : %d" % rfecv.n_features_)
print('')

# grid_scores_ was removed in scikit-learn 1.2; the per-subset mean CV
# scores now live in cv_results_["mean_test_score"]
g_scores = rfecv.cv_results_["mean_test_score"]
indices = np.argsort(g_scores)[::-1]
print('Printing RFECV results:')
for f in range(X.shape[1]):
    print("%d. Number of features: %d; Grid_Score: %f"
          % (f + 1, indices[f] + 1, g_scores[indices[f]]))

Here is the output I get:
Original number of features is 25
RFE final number of features : 12
RFECV final number of features : 3

Printing RFECV results:
1. Number of features: 3; Grid_Score: 0.818041
2. Number of features: 4; Grid_Score: 0.816065
3. Number of features: 5; Grid_Score: 0.816053
4. Number of features: 6; Grid_Score: 0.799107
5. Number of features: 7; Grid_Score: 0.797047
6. Number of features: 8; Grid_Score: 0.783034
7. Number of features: 10; Grid_Score: 0.783022
8. Number of features: 9; Grid_Score: 0.781992
9. Number of features: 11; Grid_Score: 0.778028
10. Number of features: 12; Grid_Score: 0.774052
11. Number of features: 14; Grid_Score: 0.762015
12. Number of features: 13; Grid_Score: 0.760075
13. Number of features: 15; Grid_Score: 0.752003
14. Number of features: 16; Grid_Score: 0.750015
15. Number of features: 18; Grid_Score: 0.750003
16. Number of features: 22; Grid_Score: 0.748039
17. Number of features: 17; Grid_Score: 0.746003
18. Number of features: 19; Grid_Score: 0.739105
19. Number of features: 20; Grid_Score: 0.739021
20. Number of features: 21; Grid_Score: 0.738003
21. Number of features: 23; Grid_Score: 0.729068
22. Number of features: 25; Grid_Score: 0.725056
23. Number of features: 24; Grid_Score: 0.725044
24. Number of features: 2; Grid_Score: 0.506952
25. Number of features: 1; Grid_Score: 0.272896

In this particular example:
  • For RFE: the code always returns 12 features (about half of the 25 features, as expected from the documentation)
  • For RFECV: the code returns a varying number between 1 and 25 (not half the number of features)

  • It seems to me that when RFECV is used, the number of features is picked based only on the KFCV scores - i.e. the cross-validation scores override RFE's successive pruning of features.

Is this true? If one wants to use the native recursive feature elimination algorithm, does RFECV use that algorithm, or a hybrid version of it?

In RFECV, is cross-validation performed on the subset of features that remains after each pruning step? If so, how many features are kept after each pruning step in RFECV?

Best Answer

In the cross-validated version, the features are re-ranked at each step and the lowest-ranked feature is removed - this is what the documentation calls "recursive feature elimination".
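The step-by-step re-ranking described above can be sketched by hand. This is a minimal illustration of the idea, not scikit-learn's internal code; the target of 3 surviving features and the small dataset are arbitrary choices for the sketch:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

remaining = list(range(X.shape[1]))   # indices of surviving features
svc = SVC(kernel="linear")

while len(remaining) > 3:             # stop at a target of 3 features
    svc.fit(X[:, remaining], y)
    # re-rank by the summed magnitude of the linear weights
    importance = np.abs(svc.coef_).sum(axis=0)
    worst = int(np.argmin(importance))
    del remaining[worst]              # prune the weakest feature, then refit

print("surviving features:", remaining)
```

Each pass refits the model on the surviving columns, so a feature that looked weak next to a correlated partner can climb in the ranking once that partner is gone - which is exactly why the ranking is recomputed at every step.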

If you want to compare this with the plain version, you need to compute the cross-validated score of the features selected by RFE. My guess is that the RFECV answer is correct - judging from the sharp jump in model performance as features are removed, you probably have some highly correlated features that are hurting the model's performance.

This question (algorithm - Scikit-Learn RFECV number of features based only on grid scores) comes from a similar question on Stack Overflow: https://stackoverflow.com/questions/37054995/
