Weighing true positives against true negatives

Problem description

This loss function in TensorFlow is used as a loss function in Keras/TensorFlow to weight binary decisions.

It weights false positives against false negatives:

targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
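
For reference, tf.nn.sigmoid_cross_entropy_with_logits computes exactly this expression element-wise. A minimal sketch, assuming TensorFlow 2.x and made-up example tensors:

import tensorflow as tf

targets = tf.constant([1.0, 0.0, 1.0])
logits  = tf.constant([2.0, -1.0, 0.5])

# Element-wise: targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets, logits=logits)
print(loss.numpy())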

The argument pos_weight is used as a multiplier for the positive targets:

targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))
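
This weighted form matches tf.nn.weighted_cross_entropy_with_logits, which appears to be the function the question refers to. A minimal sketch, assuming TensorFlow 2.x; the value 2.0 for pos_weight is just an example:

import tensorflow as tf

targets = tf.constant([1.0, 0.0, 1.0])
logits  = tf.constant([2.0, -1.0, 0.5])

# pos_weight > 1 makes errors on positive targets (false negatives) more costly;
# pos_weight < 1 makes them cheaper relative to errors on negative targets.
loss = tf.nn.weighted_cross_entropy_with_logits(labels=targets, logits=logits, pos_weight=2.0)
print(loss.numpy())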

Does anybody have suggestions for how, in addition, true positives could be weighted against true negatives, if their loss/reward should not have equal weight?

Recommended answer

First, note that with cross-entropy loss there is some (possibly very small) penalty for each example, even if it is classified correctly. For example, if the correct class is 1 and our logit is 10, the penalty will be

-log(sigmoid(10)) ≈ 4.5e-5
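
You can check this number directly in plain Python:

import math

p = 1.0 / (1.0 + math.exp(-10))  # sigmoid(10) ≈ 0.9999546
print(-math.log(p))              # ≈ 4.54e-05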

This loss (very slightly) pushes the network to produce an even higher logit for this case, to get its sigmoid even closer to 1. Similarly, for the negative class, even if the logit is -10, the loss will push it to be even more negative.

This is usually fine because the loss from such terms is very small. If you would like your network to actually achieve zero loss, you can use label_smoothing. This is probably as close to "rewarding" the network as you can get in the classic setup of minimizing loss (you can obviously "reward" the network by adding some constant negative number to the loss; that won't change the gradient or the training behavior, though).
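
A minimal sketch of label smoothing with the built-in Keras loss, assuming TensorFlow 2.x; label_smoothing=0.1 is just an example value:

import tensorflow as tf

# With label_smoothing=0.1, targets are squeezed toward 0.5 (1 -> 0.95, 0 -> 0.05),
# so the optimum is reached at finite logits instead of at +/- infinity.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True, label_smoothing=0.1)

targets = tf.constant([[1.0], [0.0]])
logits  = tf.constant([[10.0], [-10.0]])
print(bce(targets, logits).numpy())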

Having said that, you can penalize the network differently for the various cases - tp, tn, fp, fn - similarly to what is described in "Weight samples if incorrect guessed in binary cross entropy". (It seems the implementation there is actually incorrect: you want to use the corresponding elements of weight_tensor to weight the individual log(sigmoid(...)) terms, not the final output of cross_entropy.)
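
A hedged sketch of one way to do this, not the exact implementation from the linked question: the helper name case_weighted_bce, the per-case weights w_tp/w_tn/w_fp/w_fn, and the 0.5 decision threshold are all assumptions for illustration. Because each example activates only one of the two log(sigmoid(...)) terms, weighting the element-wise cross-entropy is equivalent to weighting the individual terms:

import tensorflow as tf

def case_weighted_bce(targets, logits, w_tp=1.0, w_tn=1.0, w_fp=2.0, w_fn=2.0):
    # Element-wise terms: targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets, logits=logits)
    # Hard decision at an assumed 0.5 threshold (not differentiable; used only to pick weights).
    preds = tf.cast(tf.sigmoid(logits) > 0.5, targets.dtype)
    # One weight per example, chosen by its (target, prediction) case.
    weights = (targets * preds * w_tp                     # true positives
               + (1.0 - targets) * (1.0 - preds) * w_tn  # true negatives
               + (1.0 - targets) * preds * w_fp          # false positives
               + targets * (1.0 - preds) * w_fn)         # false negatives
    return tf.reduce_mean(weights * ce)

targets = tf.constant([1.0, 0.0, 1.0, 0.0])
logits  = tf.constant([3.0, -3.0, -1.0, 1.0])  # tp, tn, fn, fp
print(case_weighted_bce(targets, logits).numpy())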

Using this scheme, you might want to penalize very wrong answers much more than almost-right ones. Note, however, that this already happens to a degree because of the shape of log(sigmoid(...)).
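
To see this, here is the penalty -log(sigmoid(logit)) for a positive example at a few logits, in plain Python:

import math

for logit in (-2.0, 0.0, 2.0, 10.0):
    p = 1.0 / (1.0 + math.exp(-logit))
    print(logit, -math.log(p))
# ≈ 2.13, 0.69, 0.13, 4.5e-5: a very wrong answer already costs far more than an almost-right one.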
