Question
I already know that "xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost."

But do they have any other differences?
Answer
xgboost.train is the low-level API to train a model via the gradient boosting method.

xgboost.XGBRegressor and xgboost.XGBClassifier are the wrappers ("Scikit-Learn-like wrappers", as they are called) that prepare the DMatrix and pass in the corresponding objective function and parameters. In the end, the fit call simply boils down to:
    self._Booster = train(params, dmatrix,
                          self.n_estimators, evals=evals,
                          early_stopping_rounds=early_stopping_rounds,
                          evals_result=evals_result, obj=obj, feval=feval,
                          verbose_eval=verbose)
This means that everything that can be done with XGBRegressor and XGBClassifier is doable via the underlying xgboost.train function. The other way around is obviously not true: for instance, some useful parameters of xgboost.train are not supported in the XGBModel API. The list of notable differences includes:
- xgboost.train allows setting the callbacks applied at the end of each iteration.
- xgboost.train allows training continuation via the xgb_model parameter.
- xgboost.train allows not only minimization of the eval function, but maximization as well.