I'm trying to understand how ridge regression is implemented in scikit-learn's Ridge.

Ridge regression has a closed-form solution for minimizing ||y - Xw||^2 + \alpha * ||w||^2, namely w = (X'X + \alpha * I)^{-1} X'y.

The intercept and coefficients of the fitted model seem to differ from the closed-form solution. Any idea how ridge regression is implemented in scikit-learn?

from sklearn import datasets
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
import numpy as np

# prepare dataset
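# note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2,
# so this snippet assumes an older scikit-learn version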
boston = datasets.load_boston()
X = boston.data
y = boston.target
# add the w_0 intercept where the corresponding x_0 = 1
Xp = np.concatenate([np.ones((X.shape[0], 1)), X], axis=1)

alpha = 0.5
ridge = Ridge(fit_intercept=True, alpha=alpha)
ridge.fit(X, y)

# 1. intercept and coef of the fit model
print(np.array([ridge.intercept_] + list(ridge.coef_)))
# output:
# array([  3.34288615e+01,  -1.04941233e-01,   4.70136803e-02,
#          2.52527006e-03,   2.61395134e+00,  -1.34372897e+01,
#          3.83587282e+00,  -3.09303986e-03,  -1.41150803e+00,
#          2.95533512e-01,  -1.26816221e-02,  -9.05375752e-01,
#          9.61814775e-03,  -5.30553855e-01])

# 2. the closed form solution
print(np.linalg.inv(Xp.T.dot(Xp) + alpha * np.eye(Xp.shape[1])).dot(Xp.T).dot(y))
# output:
# array([  2.17772079e+01,  -1.00258044e-01,   4.76559911e-02,
#         -6.63573226e-04,   2.68040479e+00,  -9.55123875e+00,
#          4.55214996e+00,  -4.67446118e-03,  -1.25507957e+00,
#          2.52066137e-01,  -1.15766049e-02,  -7.26125030e-01,
#          1.14804636e-02,  -4.92130481e-01])

Best answer

The tricky bit is the intercept. The closed-form solution you have is the one without a separate, unpenalized intercept: by appending a column of 1s to the data, you also put the L2 penalty on the intercept term. Scikit-learn's ridge regression does not penalize the intercept.
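You can see this directly in the closed form by keeping the column of 1s but leaving the intercept out of the penalty, i.e. zeroing its diagonal entry in the penalty matrix. A minimal sketch (reusing Xp, y, and alpha from the question; the penalty matrix D is introduced here just for illustration):

# penalty matrix: identity, except no penalty on the intercept column
D = np.eye(Xp.shape[1])
D[0, 0] = 0.0
# closed-form solution of ||y - Xp*w||^2 + alpha * w'Dw
w = np.linalg.solve(Xp.T.dot(Xp) + alpha * D, Xp.T.dot(y))
print(w)
# first entry matches ridge.intercept_ and the rest match ridge.coef_
# from the fit_intercept=True fit above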

If, on the other hand, you do want the L2 penalty imposed on the bias, simply call Ridge on Xp (and turn off intercept fitting in the constructor) and you get:

>>> ridge = Ridge(fit_intercept=False, alpha=alpha)
>>> ridge.fit(Xp, y)
>>> print(np.array(list(ridge.coef_)))
[  2.17772079e+01  -1.00258044e-01   4.76559911e-02  -6.63573226e-04
   2.68040479e+00  -9.55123875e+00   4.55214996e+00  -4.67446118e-03
  -1.25507957e+00   2.52066137e-01  -1.15766049e-02  -7.26125030e-01
   1.14804636e-02  -4.92130481e-01]
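
Conversely, the fit_intercept=True result can also be reproduced in closed form by centering: solve the penalized problem on mean-centered X and y, then recover the intercept from the means. A sketch of that equivalence (not scikit-learn's literal code path), reusing X, y, and alpha from the question:

Xc = X - X.mean(axis=0)          # center the features
yc = y - y.mean()                # center the target
w = np.linalg.solve(Xc.T.dot(Xc) + alpha * np.eye(X.shape[1]), Xc.T.dot(yc))
b = y.mean() - X.mean(axis=0).dot(w)   # recover the intercept from the means
print(np.array([b] + list(w)))
# matches ridge.intercept_ and ridge.coef_ from the fit_intercept=True fit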

Source: machine-learning - Understanding Ridge linear regression in scikit-learn, a similar question on Stack Overflow: https://stackoverflow.com/questions/40557569/
