Are the k-fold cross-validation scores from scikit-learn's cross_val_score and GridSearchCV biased if we include transformers in a pipeline?


Problem description


Data pre-processors such as StandardScaler should be used to fit_transform the train set and only transform (not fit) the test set. I expect the same fit/transform process to apply to cross-validation when tuning the model. However, I found that cross_val_score and GridSearchCV fit_transform the entire train set with the preprocessor (rather than fit_transform the inner-train set and transform the inner-validation set). I believe this artificially removes variance from the inner-validation set, which biases the cv score (the metric GridSearch uses to select the best model). Is this a concern, or did I actually miss something?
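The per-fold behaviour the question expects can be sketched by hand. This is a minimal illustration (not from the original post), using the built-in scikit-learn copy of the same data set and a plain KFold split, so the exact numbers may differ slightly from the post's stratified cv=5 results:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

scores = []
for train_idx, val_idx in KFold(n_splits=5).split(X):
    # Fit the scaler on the inner-train split ONLY...
    sc = StandardScaler().fit(X[train_idx])
    clf = LogisticRegression(penalty='l2', random_state=42, max_iter=1000)
    clf.fit(sc.transform(X[train_idx]), y[train_idx])
    # ...and merely transform the inner-validation split with it.
    scores.append(clf.score(sc.transform(X[val_idx]), y[val_idx]))
print(scores)
```

This is exactly what nesting the scaler inside a Pipeline is supposed to reproduce automatically.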


To demonstrate the above issue, I tried the following three simple test cases with the Breast Cancer Wisconsin (Diagnostic) Data Set from Kaggle.

  1. I intentionally fit and transform the entire X with StandardScaler()
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X_sc = StandardScaler().fit_transform(X)
lr = LogisticRegression(penalty='l2', random_state=42)
cross_val_score(lr, X_sc, y, cv=5)
  2. I include SC and LR in the Pipeline and run cross_val_score
pipe = Pipeline([
    ('sc', StandardScaler()),
    ('lr', LogisticRegression(penalty='l2', random_state=42))
])
cross_val_score(pipe, X, y, cv=5)
  3. Same as 2, but with GridSearchCV
pipe = Pipeline([
    ('sc', StandardScaler()),
    ('lr', LogisticRegression(random_state=42))
])
params = {
    'lr__penalty': ['l2']
}
gs = GridSearchCV(pipe, param_grid=params, cv=5).fit(X, y)
gs.cv_results_

They all produce the same validation scores: [0.9826087, 0.97391304, 0.97345133, 0.97345133, 0.99115044]

Recommended answer


No, sklearn does not fit_transform with the entire dataset.


To check this, I subclassed StandardScaler to print the size of the dataset sent to it.

class StScaler(StandardScaler):
    def fit_transform(self, X, y=None):
        print(len(X))  # report how many rows the scaler is fitted on
        return super().fit_transform(X, y)


If you now replace StandardScaler with this subclass in your code, you'll see that the dataset size passed in the first case is actually bigger.


But why does the accuracy remain exactly the same? I think this is because LogisticRegression is not very sensitive to feature scale. If we instead use a classifier that is very sensitive to scale, such as KNeighborsClassifier, you'll find that the accuracy between the two cases starts to vary.

from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_sc = StScaler().fit_transform(X)
lr = KNeighborsClassifier(n_neighbors=1)
cross_val_score(lr, X_sc, y, cv=5)

Output:

569
[0.94782609 0.96521739 0.97345133 0.92920354 0.9380531 ]

And the second case,

pipe = Pipeline([
    ('sc', StScaler()),
    ('lr', KNeighborsClassifier(n_neighbors=1))
])
print(cross_val_score(pipe, X, y, cv=5))

Output:

454
454
456
456
456
[0.95652174 0.97391304 0.97345133 0.92920354 0.9380531 ]

Not a big change accuracy-wise, but a change nonetheless.
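To see the gap side by side, a small check (my own sketch, using the built-in copy of the data set rather than the Kaggle CSV) can compare the leaky setup, where the scaler sees every row before cross-validation, against the pipelined one, where it is refitted inside each fold:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=1)

# Case 1: scaler fitted on ALL rows before CV -- validation statistics leak in
leaky = cross_val_score(knn, StandardScaler().fit_transform(X), y, cv=5)

# Case 2: scaler refitted on each inner-train split via the pipeline -- no leak
clean = cross_val_score(Pipeline([('sc', StandardScaler()), ('knn', knn)]), X, y, cv=5)

print(leaky.mean(), clean.mean())
```

With a scale-sensitive model like 1-NN the two means differ slightly, matching the fold-by-fold differences shown above.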
