Fitting a GPR to my data takes several hours, so I would like to reuse a pre-trained GaussianProcessRegressor.
I think I found a workaround, and it seems to produce the same results, but I would like to know whether there is a better solution, because this feels like a bit of a hack.
import numpy as np
from scipy.linalg import cholesky, cho_solve
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

kernel = ConstantKernel(0.25, (1e-3, 1e3)) * RBF(hyper_params_rbf, (1e-3, 1e4)) + WhiteKernel(0.0002, (1e-23, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=30)
# normalize the data
train = False
if train:
    print('Fitting')
    gp.fit(X, y)
else:
    # Manually set the attributes that fit() would normally populate
    gp.kernel_ = kernel
    gp.X_train_ = X
    gp.y_train_ = y
    gp._y_train_mean = np.zeros(1)  # unused, as y is not normalized in the regressor
    # Precompute quantities required for predictions which are independent of actual query points
    K = gp.kernel_(gp.X_train_)
    K[np.diag_indices_from(K)] += gp.alpha
    gp.L_ = cholesky(K, lower=True)
    gp.alpha_ = cho_solve((gp.L_, True), gp.y_train_)
y_pred, sigma = gp.predict(x, return_std=True)
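Note that this manual path skips hyperparameter optimization entirely, so it only reproduces the fitted model if hyper_params_rbf and the other kernel parameters are already the optimized values. A minimal sketch (assuming gp was fitted once beforehand) for reading those optimized values back out of a fitted regressor:

# Assumes `gp` has already been fitted once; gp.kernel_ holds the optimized kernel.
print(gp.kernel_)               # optimized kernel with its fitted hyperparameters
print(gp.kernel_.theta)         # flat array of log-transformed hyperparameters
print(gp.kernel_.get_params())  # nested dict of the individual kernel parameters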
Best answer
You should serialize the GaussianProcessRegressor model with the pickle or joblib library.
from sklearn.externals import joblib  # in newer scikit-learn versions, use `import joblib` directly

if train:
    print('Fitting')
    gp.fit(X, y)
    joblib.dump(gp, 'filename.pkl')
else:
    gp = joblib.load('filename.pkl')
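Since the answer also mentions pickle, here is a minimal sketch of the same idea using the standard-library pickle module directly (the file name is illustrative):

import pickle

# Save the fitted regressor to disk once
with open('gp_model.pkl', 'wb') as f:
    pickle.dump(gp, f)

# Later, load it back and predict without refitting
with open('gp_model.pkl', 'rb') as f:
    gp = pickle.load(f)
y_pred, sigma = gp.predict(x, return_std=True)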
See the scikit-learn documentation here for details.
Regarding python - Sklearn: Gaussian process regression with pre-trained hyperparameters, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49817982/