How to get probabilities from ResNet using PyTorch?

Problem Description

I am fine-tuning a ResNet on my dataset, which has multiple labels.

I would like to convert the 'scores' of the classification layer to probabilities and use those probabilities to calculate the loss during training.

Could you give an example code for this? Can I use something like this:

       P = net.forward(x)
       p = torch.nn.functional.softmax(P, dim=1)
       loss = torch.nn.functional.cross_entropy(P, y)

I am unclear whether this is the correct way or not, since I am passing probabilities as the input to the cross-entropy loss.

Recommended Answer

So, you are training a model, i.e. a ResNet, with cross-entropy in PyTorch. Your loss calculation would look like this:

logit = model(x)
loss = torch.nn.functional.cross_entropy(logit, y)
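
For context, here is a minimal sketch of where model, x and y might come from when fine-tuning a torchvision ResNet; num_classes, the optimizer settings and the dummy batch below are assumptions for illustration, not part of the original question:

import torch
import torch.nn.functional as F
import torchvision

num_classes = 10  # hypothetical: the number of classes in your dataset

# load a pretrained ResNet and replace its classification head
# (in torchvision older than 0.13, use pretrained=True instead of weights=...)
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# one training step on a dummy batch (x: images, y: integer class labels)
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, num_classes, (4,))

logit = model(x)                      # raw scores, shape (batch, num_classes)
loss = F.cross_entropy(logit, y)      # applies log_softmax + NLL internally

optimizer.zero_grad()
loss.backward()
optimizer.step()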

In this case, you can calculate the probabilities of all classes by doing:

logit = model(x)
p = torch.nn.functional.softmax(logit, dim=1)
# to calculate the loss from the probabilities, take the log and use NLL loss
loss = torch.nn.functional.nll_loss(torch.log(p), y)

Note that if you use probabilities you will have to take the log manually, which is bad for numerical reasons. Instead, use either log_softmax or cross_entropy, in which case you end up computing the loss from the logits with cross-entropy and computing the probabilities separately.
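
A small sketch of that recommendation: log_softmax + nll_loss on the logits gives the same loss value as cross_entropy, and the probabilities are computed separately only when you actually need them. The shapes and the dummy batch are assumptions for illustration:

import torch
import torch.nn.functional as F

logit = torch.randn(4, 5)             # hypothetical batch of 4 samples, 5 classes
y = torch.randint(0, 5, (4,))

# loss computed directly from logits; log_softmax + nll_loss is exactly
# what cross_entropy does internally, so the two values agree
log_p = F.log_softmax(logit, dim=1)
loss_a = F.nll_loss(log_p, y)
loss_b = F.cross_entropy(logit, y)
assert torch.allclose(loss_a, loss_b)

# probabilities computed separately (e.g. for reporting predictions);
# they are not fed back into the loss
with torch.no_grad():
    p = F.softmax(logit, dim=1)       # equivalently log_p.exp()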
