Keras: how to evaluate model accuracy (evaluate_generator vs. predict_generator)?

This article explains how to resolve a mismatch between the accuracies reported by Keras' evaluate_generator() and predict_generator(). The question and the recommended answer below may serve as a useful reference for readers facing the same problem.

Problem Description

I am getting a different model accuracy from keras evaluate_generator() and predict_generator() for a binary classification problem:

def evaluate_model(model, generator, nBatches):
    score = model.evaluate_generator(generator=generator,               # Generator yielding tuples
                                     steps=generator.samples//nBatches, # number of steps (batches of samples) to yield from generator before stopping
                                     max_queue_size=10,                 # maximum size for the generator queue
                                     workers=1,                         # maximum number of processes to spin up when using process based threading
                                     use_multiprocessing=False,         # whether to use process-based threading
                                     verbose=0)
    print("loss: %.3f - acc: %.3f" % (score[0], score[1]))

With evaluate_generator(), I am getting acc values of up to 0.7.

import numpy as np
from sklearn.metrics import confusion_matrix

def evaluate_predictions(model, generator, nBatches):
    predictions = model.predict_generator(generator=generator,               # Generator yielding tuples
                                          steps=generator.samples//nBatches, # number of steps (batches of samples) to yield from generator before stopping
                                          max_queue_size=10,                 # maximum size for the generator queue
                                          workers=1,                         # maximum number of processes to spin up when using process-based threading
                                          use_multiprocessing=False,         # whether to use process-based threading
                                          verbose=0)

    # Evaluate predictions
    predictedClass = np.argmax(predictions, axis=1)             # index of the highest-scoring class per sample
    trueClass = generator.classes                               # ground-truth labels in the generator's original file order
    classLabels = list(generator.class_indices.keys())          # class names as mapped by the generator

    # Create confusion matrix
    confusionMatrix = confusion_matrix(
        y_true=trueClass,                                       # ground truth (correct) target values
        y_pred=predictedClass)                                  # estimated targets as returned by a classifier
    print(confusionMatrix)

With predict_generator(), I am getting acc values of 0.5. I am calculating acc as (TP+TN)/(TP+TN+FP+FN).
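
As an illustration of that formula, here is a minimal sketch (the helper name and the counts are made up, not from the question) that derives accuracy from a 2x2 confusion matrix like the one printed above:

import numpy as np

def accuracy_from_confusion_matrix(cm):
    # sklearn's binary confusion matrix is [[TN, FP], [FN, TP]], so the diagonal
    # holds TN and TP and accuracy = (TP + TN) / (TP + TN + FP + FN).
    return np.trace(cm) / np.sum(cm)

cm = np.array([[25, 25],   # hypothetical counts: TN, FP
               [25, 25]])  # hypothetical counts: FN, TP
print(accuracy_from_confusion_matrix(cm))  # 0.5 for these made-up counts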

  • Am I right that the acc from evaluate_generator() is based on (TP+TN)/(TP+TN+FP+FN)?
  • How can acc be different when I use the same data and generator?

Recommended Answer

To solve this issue (the evaluate_generator vs. predict_generator accuracy mismatch), you simply need to do three things in your code:

(1) Set

shuffle = False

in test_datagen.flow_from_directory or test_datagen.flow_from_dataframe,

(2) set

workers = 0

in model.predict_generator, and (3) change the ground-truth labels to

trueClass = generator.classes[generator.index_array]

These changes make your program run the generator on the main thread, keep the index order intact, and match each prediction with the correct image ID. Then both accuracies should be the same.
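
Putting the three changes together, a minimal sketch (the directory path, image size, and batch size are assumptions, not from the answer):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import confusion_matrix

nBatches = 32                                                 # hypothetical batch size
test_datagen = ImageDataGenerator(rescale=1./255)
generator = test_datagen.flow_from_directory(
    "data/test",                                              # hypothetical test directory
    target_size=(150, 150),                                   # hypothetical input size
    batch_size=nBatches,
    class_mode="categorical",
    shuffle=False)                                            # (1) do not shuffle the test data

predictions = model.predict_generator(generator=generator,   # model: the compiled Keras model from the question
                                      steps=generator.samples//nBatches,
                                      max_queue_size=10,
                                      workers=0,              # (2) run the generator on the main thread
                                      use_multiprocessing=False,
                                      verbose=0)

predictedClass = np.argmax(predictions, axis=1)
trueClass = generator.classes[generator.index_array]          # (3) reorder labels to match the yielded samples
print(confusion_matrix(y_true=trueClass, y_pred=predictedClass))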

