I am trying to implement a boosted Poisson regression model in xgboost, but I am finding the results are biased at low frequencies. To illustrate, here is some minimal Python code that I think replicates the issue:
import numpy as np
import pandas as pd
import xgboost as xgb

def get_preds(mult):
    # generate toy dataset for illustration
    # 4 observations with linearly increasing frequencies
    # the frequencies are scaled by `mult`
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=[i*mult for i in [1, 2, 3, 4]],
                       weight=[1000, 1000, 1000, 1000])
    # train a poisson booster on the toy data
    bst = xgb.train(
        params={"objective": "count:poisson"},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)
    # return fitted frequencies after reversing the scaling
    return bst.predict(dmat)/mult
# test multipliers in the range [10**(-8), 10**0]
# display fitted frequencies
mults = [10**i for i in range(-8, 1)]
df = pd.DataFrame(np.round(np.vstack([get_preds(m) for m in mults]), 0))
df.index = mults
df.columns = ["(0, 0)", "(0, 1)", "(1, 0)", "(1, 1)"]
df
# --- result ---
#               (0, 0)   (0, 1)   (1, 0)   (1, 1)
# 1.000000e-08  11598.0  11598.0  11598.0  11598.0
# 1.000000e-07   1161.0   1161.0   1161.0   1161.0
# 1.000000e-06    118.0    118.0    118.0    118.0
# 1.000000e-05     12.0     12.0     12.0     12.0
# 1.000000e-04      2.0      2.0      3.0      3.0
# 1.000000e-03      1.0      2.0      3.0      4.0
# 1.000000e-02      1.0      2.0      3.0      4.0
# 1.000000e-01      1.0      2.0      3.0      4.0
# 1.000000e+00      1.0      2.0      3.0      4.0
Notice that the predictions seem to blow up at low frequencies. This may have something to do with the Poisson lambda * weight dropping below 1 (and in fact, increasing the weight above 1000 does shift the blowup to lower frequencies), but I would still expect the predictions to approach the mean training frequency (2.5). Also (not shown in the example above), reducing eta seems to increase the amount of bias in the predictions. What would cause this to happen? Is there a parameter available that would mitigate the effect?
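To probe the weight hypothesis directly, here is a sketch of a variant of get_preds with the observation weight exposed as a parameter (get_preds_weighted is just an illustrative name); if lambda * weight is the relevant quantity, growing the weight at a fixed scale should push the fits back toward the true frequencies:

import numpy as np
import xgboost as xgb

def get_preds_weighted(mult, w):
    # same toy setup as get_preds above, but with a configurable weight `w`
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=[i*mult for i in [1, 2, 3, 4]],
                       weight=[w, w, w, w])
    bst = xgb.train(
        params={"objective": "count:poisson"},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)
    return bst.predict(dmat)/mult

# hold the scale fixed at mult=1e-4 and vary the weight;
# the blowup should recede as the weight grows
for w in [10**3, 10**4, 10**5]:
    print(w, get_preds_weighted(10**(-4), w))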
Best answer
After some digging, I found a solution. Documenting it here in case anyone else runs into the same issue. It turns out that I needed to add an offset term equal to the (natural) log of the mean frequency. In case it isn't obvious why: the initial predictions start at a frequency of 0.5, and many boosting iterations are needed just to rescale the predictions to the mean frequency.
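For a rough sense of how slow that rescaling is, here is a back-of-the-envelope sketch; it assumes xgboost's documented default max_delta_step of 0.7 for count:poisson and the default eta of 0.3, with shrinkage applied to the capped step:

import numpy as np

# with mult = 1e-8, the margin must travel from log(0.5) (the default
# base_score) down to about log(2.5e-8), while each boosting round can
# move it by at most roughly eta * max_delta_step on the log scale
start = np.log(0.5)                   # ~ -0.69
target = np.log(2.5e-8)               # ~ -17.5
per_round = 0.3 * 0.7                 # eta * max_delta_step (defaults)
print((start - target) / per_round)   # ~ 80 rounds just to reach the mean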
See the code below for an update to the toy example. As I suggested in the original question, the predictions now approach the mean frequency (2.5) at the lower scales.
import numpy as np
import pandas as pd
import xgboost as xgb

def get_preds(mult):
    # generate toy dataset for illustration
    # 4 observations with linearly increasing frequencies
    # the frequencies are scaled by `mult`
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=[i*mult for i in [1, 2, 3, 4]],
                       weight=[1000, 1000, 1000, 1000])
    ## add an offset term equal to the log of the mean frequency
    offset = np.log(np.mean([i*mult for i in [1, 2, 3, 4]]))
    dmat.set_base_margin(np.repeat(offset, 4))
    # train a poisson booster on the toy data
    bst = xgb.train(
        params={"objective": "count:poisson"},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)
    # return fitted frequencies after reversing the scaling
    return bst.predict(dmat)/mult
# test multipliers in the range [10**(-8), 10**0]
# display fitted frequencies
mults = [10**i for i in range(-8, 1)]
## round to 1 decimal place to show that the result approaches 2.5
df = pd.DataFrame(np.round(np.vstack([get_preds(m) for m in mults]), 1))
df.index = mults
df.columns = ["(0, 0)", "(0, 1)", "(1, 0)", "(1, 1)"]
df
# --- result ---
#               (0, 0)  (0, 1)  (1, 0)  (1, 1)
# 1.000000e-08     2.5     2.5     2.5     2.5
# 1.000000e-07     2.5     2.5     2.5     2.5
# 1.000000e-06     2.5     2.5     2.5     2.5
# 1.000000e-05     2.5     2.5     2.5     2.5
# 1.000000e-04     2.4     2.5     2.5     2.6
# 1.000000e-03     1.0     2.0     3.0     4.0
# 1.000000e-02     1.0     2.0     3.0     4.0
# 1.000000e-01     1.0     2.0     3.0     4.0
# 1.000000e+00     1.0     2.0     3.0     4.0
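As an aside, the same offset can also be expressed through the base_score parameter, which for count:poisson is given on the frequency (mean) scale and log-transformed internally; this should be equivalent to the uniform base margin above. A minimal sketch of that variant (get_preds_base_score is just an illustrative name):

import numpy as np
import xgboost as xgb

def get_preds_base_score(mult):
    labels = [i*mult for i in [1, 2, 3, 4]]
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=labels,
                       weight=[1000, 1000, 1000, 1000])
    bst = xgb.train(
        # base_score replaces the explicit log-offset on the margin
        params={"objective": "count:poisson",
                "base_score": float(np.mean(labels))},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)
    return bst.predict(dmat)/mult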