This is a slightly modified version of some code I found here...

I use the same logic as the original author, but I still can't get good accuracy. The mean reciprocal rank is close (mine: 52.79 vs. the example's 48.04).

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

cv = CountVectorizer(binary=True, max_df=0.95)
feature_set = cv.fit_transform(df["short_description"])

X_train, X_test, y_train, y_test = train_test_split(
    feature_set, df["category"].values, random_state=2000)

scikit_log_reg = LogisticRegression(
    verbose=1, solver="liblinear", random_state=0, C=5, penalty="l2", max_iter=1000)

model = scikit_log_reg.fit(X_train, y_train)

target = to_categorical(y_test)
y_pred = model.predict_proba(X_test)
label_ranking_average_precision_score(target, y_pred)
>> 0.5279108613021547

model.score(X_test, y_test)
>> 0.38620071684587814
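Note that the two numbers above measure different things: `label_ranking_average_precision_score` credits the true class by its rank in the probability ordering (1/rank per sample when each sample has one label), while `model.score` is plain top-1 accuracy. A minimal sketch with invented scores shows how the two diverge:

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# 3 samples, 3 classes; one-hot true labels and made-up probability scores
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.6, 0.3, 0.1],    # true class ranked 1st -> contributes 1
                    [0.4, 0.35, 0.25],  # true class ranked 2nd -> contributes 1/2
                    [0.5, 0.3, 0.2]])   # true class ranked 3rd -> contributes 1/3

# LRAP averages 1/rank of the true class: (1 + 1/2 + 1/3) / 3 ≈ 0.611
lrap = label_ranking_average_precision_score(y_true, y_score)

# Top-1 accuracy only counts rank-1 hits: 1 of 3 samples ≈ 0.333
top1 = (y_score.argmax(axis=1) == y_true.argmax(axis=1)).mean()
```

So a model can score well on ranking while its top-1 accuracy stays much lower, which is exactly the pattern in the numbers above.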


But the notebook sample's accuracy (59.80) doesn't match what my code produces (38.62).

Is the following function, used in the example notebook, returning accuracy correctly?

def compute_accuracy(eval_items: list):
    correct = 0

    for item in eval_items:
        true_pred = item[0]
        machine_pred = set(item[1])

        # A sample counts as correct if ANY of its true categories
        # appears anywhere in the predicted set.
        for cat in true_pred:
            if cat in machine_pred:
                correct += 1
                break

    accuracy = correct / float(len(eval_items))
    return accuracy

Best answer

The notebook code checks whether the actual category is among the top three predictions returned by the model:

def get_top_k_predictions(model, X_test, k):
    probs = model.predict_proba(X_test)
    # argsort is ascending, so the last k columns are the k most likely classes
    best_n = np.argsort(probs, axis=1)[:, -k:]
    preds = [[model.classes_[predicted_cat] for predicted_cat in prediction] for prediction in best_n]
    preds = [item[::-1] for item in preds]  # reverse so the top prediction comes first
    return preds
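The argsort slice is the only subtle part: `np.argsort` sorts ascending, so `[:, -k:]` picks the column indices of the k highest probabilities, and the final reversal puts the most likely class first. A small trace with invented probabilities:

```python
import numpy as np

probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.2, 0.3]])
classes = np.array(["a", "b", "c"])  # stands in for model.classes_

k = 2
best_n = np.argsort(probs, axis=1)[:, -k:]  # top-k column indices, ascending prob
preds = [[classes[i] for i in row] for row in best_n]
preds = [row[::-1] for row in preds]        # most likely class first
# preds == [['b', 'c'], ['a', 'c']]
```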


If you replace the evaluation part of your code with the following, you will see that the model's top-3 accuracy is indeed 0.5980:

...

model = scikit_log_reg.fit(X_train, y_train)

top_preds = get_top_k_predictions(model, X_test, 3)
pred_pairs = list(zip([[v] for v in y_test], top_preds))
print(compute_accuracy(pred_pairs))

# below is a simpler & more Pythonic version of compute_accuracy
print(np.mean([actual in pred for actual, pred in zip(y_test, top_preds)]))

10-07 21:52