I trained an xgboost classifier on a binary classification problem. It produces about 70% accurate predictions, yet the log loss is huge: 9.13. I suspect some predictions are very far from the target, but I don't understand why that would happen; other people report a much better logloss (0.55-0.6) on the same data using xgboost.

from readCsv import x_train, y_train
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss
from xgboost import XGBClassifier

seed=7
test_size=0.09

X_train, X_test, y_train, y_test = train_test_split(
    x_train, y_train, test_size=test_size, random_state=seed)

# fit model on training data
model = XGBClassifier(max_depth=5,
                      learning_rate=0.02,
                      objective='binary:logistic',
                      n_estimators=5000)
model.fit(X_train, y_train)

# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]

accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

ll = log_loss(y_test, y_pred)
print("Log_loss: %f" % ll)
print(model)


This produces the following output:

Accuracy: 73.54%
Log_loss: 9.139162
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1,
       gamma=0, learning_rate=0.02, max_delta_step=0, max_depth=5,
       min_child_weight=1, missing=None, n_estimators=5000, nthread=-1,
       objective='binary:logistic', reg_alpha=0, reg_lambda=1,
       scale_pos_weight=1, seed=0, silent=True, subsample=1)


Does anyone know the reason for this high log loss? Thanks!

Best Answer

Solution: use model.predict_proba() instead of model.predict().

This reduced the log loss from 7+ to 0.52, which is within the expected range. The values coming out of model.predict() were very large (e.g. 1e18); it seems they need to be passed through some function to become valid probability scores (between 0 and 1).
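
A minimal sketch of the fix, assuming the same model, X_test and y_test as in the question's code; predict_proba is the standard scikit-learn-style method on XGBClassifier for class probabilities:

from sklearn.metrics import accuracy_score, log_loss

# predict_proba returns an (n_samples, 2) array of class probabilities;
# column 1 is P(y == 1), which is what log_loss expects for a binary target.
y_proba = model.predict_proba(X_test)[:, 1]

# Hard 0/1 labels are still fine for accuracy.
y_pred = model.predict(X_test)

print("Accuracy: %.2f%%" % (accuracy_score(y_test, y_pred) * 100.0))
print("Log_loss: %f" % log_loss(y_test, y_proba))

For reference: when hard 0/1 labels are passed to log_loss, scikit-learn clips them away from 0 and 1 to avoid log(0), so every wrong prediction contributes roughly -log(1e-15) ≈ 34.5 under the older default eps; at a ~26.5% error rate that works out to about 9.14, which roughly matches the figure reported above.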

Original question on Stack Overflow ("machine learning - xgboost: huge logloss despite reasonable accuracy"): https://stackoverflow.com/questions/44661008/
