I'm particularly interested in specificity_at_sensitivity. Going by the Keras docs:

from keras import metrics

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae, metrics.categorical_accuracy])

But it looks like the metrics list must contain functions of arity 2, accepting (y_true, y_pred) and returning a single tensor value.
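For example, the shape of that arity-2 contract can be sketched as follows. This is written in plain NumPy for readability (a real Keras metric would use tensor ops so it can run inside the graph), and the function name is purely illustrative:

```python
import numpy as np

def specificity_metric(y_true, y_pred):
    """Illustrative arity-2 metric: (y_true, y_pred) -> single scalar.

    Assumes binary labels (0/1) and predicted probabilities for class 1.
    """
    y_pred_label = np.round(y_pred)  # threshold probabilities at 0.5
    tn = np.sum((y_true == 0) & (y_pred_label == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred_label == 1))  # false positives
    return tn / (tn + fp)
```

A Keras-compatible version would do the same computation with backend tensor operations instead of NumPy calls.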
EDIT: Currently I do it like this:
import numpy as np
from sklearn.metrics import confusion_matrix

predictions = model.predict(x_test)
y_test = np.argmax(y_test, axis=-1)
predictions = np.argmax(predictions, axis=-1)
c = confusion_matrix(y_test, predictions)
print('Confusion matrix:\n', c)
print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))

The downside of this approach is that I only get the output I care about once training is finished. I'd like to get the metric every 10 epochs or so.
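One way to get these numbers during training rather than only at the end is a custom callback that reruns the same confusion-matrix computation every N epochs. This is a sketch, not part of Keras: the `ConfusionMetrics` class, its constructor arguments, and the convention of treating row 0 of the confusion matrix as the positive class (matching the code above) are all assumptions.

```python
import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import confusion_matrix

class ConfusionMetrics(Callback):
    """Compute sensitivity/specificity on held-out data every `period` epochs.

    x_val: validation inputs; y_val: one-hot validation labels.
    """
    def __init__(self, x_val, y_val, period=10):
        super().__init__()
        self.x_val = x_val
        self.y_val = np.argmax(y_val, axis=-1)
        self.period = period
        self.history = []  # (epoch, sensitivity, specificity) tuples

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.period != 0:
            return
        preds = np.argmax(self.model.predict(self.x_val), axis=-1)
        c = confusion_matrix(self.y_val, preds, labels=[0, 1])
        # same convention as the snippet above: row 0 is the positive class
        sensitivity = c[0, 0] / (c[0, 0] + c[0, 1])
        specificity = c[1, 1] / (c[1, 1] + c[1, 0])
        self.history.append((epoch, sensitivity, specificity))
        print(f'epoch {epoch}: sensitivity {sensitivity:.3f}, '
              f'specificity {specificity:.3f}')
```

You would then pass an instance via `model.fit(..., callbacks=[ConfusionMetrics(x_val, y_val)])`.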

Best Answer

I found a related issue on GitHub, and it seems that Keras models still don't support tf.metrics. However, if you're very interested in using tf.metrics.specificity_at_sensitivity, I'd suggest the following workaround (inspired by BogdanRuzh's solution):

import tensorflow as tf

def specificity_at_sensitivity(sensitivity, **kwargs):
    def metric(labels, predictions):
        # any tensorflow metric that returns a (value, update_op) pair works here
        value, update_op = tf.metrics.specificity_at_sensitivity(labels, predictions, sensitivity, **kwargs)

        # find all local variables created for this metric
        metric_vars = [i for i in tf.local_variables() if 'specificity_at_sensitivity' in i.name.split('/')[2]]

        # add the metric variables to the GLOBAL_VARIABLES collection
        # so they get initialized along with the model's variables in a new session
        for v in metric_vars:
            tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)

        # force the metric value to be updated on every evaluation
        with tf.control_dependencies([update_op]):
            value = tf.identity(value)
        return value
    return metric


model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae,
                       metrics.categorical_accuracy,
                       specificity_at_sensitivity(0.5)])

UPDATE:
You can use model.evaluate to retrieve the metrics after training.
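For instance, with any compiled model (the toy model and random data below are purely illustrative, and `return_dict=True` assumes a reasonably recent Keras):

```python
import numpy as np
from keras import Input, layers, models

# toy stand-in model; any compiled model works the same way
model = models.Sequential([Input(shape=(4,)),
                           layers.Dense(2, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['mae', 'categorical_accuracy'])

x_test = np.random.rand(8, 4).astype('float32')
y_test = np.eye(2)[np.random.randint(0, 2, 8)]

# evaluate() runs the loss and every compiled metric over the test data
results = model.evaluate(x_test, y_test, verbose=0, return_dict=True)
print(results)
```

Without `return_dict=True`, evaluate returns a plain list in the order `[loss] + metrics`, which you can label using `model.metrics_names`.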

Regarding "python - Using tf.metrics in Keras?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/50539213/
