Question
I am trying to apply deep learning to a binary classification problem with high class imbalance between the target classes (500k vs. 31k examples). I want to write a custom loss function which should be like:

minimize(100 - ((predicted_smallerclass / total_smallerclass) * 100))
Appreciate any pointers on how I can build this logic.
Answer
You can add class weights to the loss function by multiplying the logits. The regular cross-entropy loss is:
loss(x, class) = -log(exp(x[class]) / (\sum_j exp(x[j])))
= -x[class] + log(\sum_j exp(x[j]))
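As a quick sanity check, the equality of these two forms can be verified numerically. A minimal numpy sketch (the logit values are made up for illustration):

```python
import numpy as np

x = np.array([2.0, -1.0])  # made-up logits for one 2-class example
cls = 0                    # index of the true class

# Form 1: -log(softmax probability of the true class)
loss_a = -np.log(np.exp(x[cls]) / np.sum(np.exp(x)))

# Form 2: -x[class] + log(sum_j exp(x[j]))
loss_b = -x[cls] + np.log(np.sum(np.exp(x)))

assert np.isclose(loss_a, loss_b)
```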
In the weighted case:
loss(x, class) = weights[class] * -x[class] + log(\sum_j exp(weights[class] * x[j]))
So by multiplying the logits, you re-scale the predictions of each class by its class weight.
For example:
ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([ratio, 1.0 - ratio])
logits = ...  # shape [batch_size, 2]
weighted_logits = tf.multiply(logits, class_weight)  # shape [batch_size, 2]
xent = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=weighted_logits, name="xent_raw")
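To see the effect of the weighting numerically, here is a minimal numpy sketch of the same idea (the logit values are made up; `xent` just evaluates the loss formula above):

```python
import numpy as np

def xent(logits, label):
    # -x[class] + log(sum_j exp(x[j])), the cross-entropy form above
    return -logits[label] + np.log(np.sum(np.exp(logits)))

ratio = 31.0 / (500.0 + 31.0)
class_weight = np.array([ratio, 1.0 - ratio])

logits = np.array([1.5, 0.5])  # made-up logits favouring majority class 0
plain = xent(logits, 1)                    # loss on a minority-class example
weighted = xent(logits * class_weight, 1)  # same example, weighted logits

# Scaling down the majority-class logit makes the minority-class
# prediction relatively larger, so its loss drops here.
assert weighted < plain
```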
There is now a standard loss function that supports a weight per example in the batch:
tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)
Here weights should be converted from class weights into a weight per example (with shape [batch_size]). See the documentation.
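One common way to do that conversion is to index the class-weight vector with each example's label (in TensorFlow, `tf.gather(class_weights, labels)` does this). A minimal numpy sketch, with a made-up batch of labels:

```python
import numpy as np

ratio = 31.0 / (500.0 + 31.0)
class_weights = np.array([ratio, 1.0 - ratio])  # one weight per class

labels = np.array([0, 1, 1, 0])      # made-up labels for a batch of 4
weights = class_weights[labels]      # shape [batch_size], one weight per example
```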