Problem description
I'm trying to use the sum of squared differences (SSD) between two images as the loss function for my network.
import tensorflow as tf

# h_fc2 is my output layer, y_ is my label image.
ssd = tf.reduce_sum(tf.square(y_ - h_fc2))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(ssd)
The problem is that the weights then diverge and I get the error
ReluGrad input is not finite. : Tensor had Inf values
Why is that? I did try some other things, such as normalizing the SSD by the image size (did not work) or clipping the output values to 1 (does not crash anymore, but I still need to evaluate this):
# Cap the output at 1 before taking the squared difference.
ssd_min_1 = tf.reduce_sum(tf.square(y_ - tf.minimum(h_fc2, 1)))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(ssd_min_1)
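As an aside, tf.minimum only caps the output from above; if the output can also fall below the valid range, tf.clip_by_value may be closer to the intent. A minimal sketch, assuming the valid pixel range is [0, 1]:

# Hypothetical variant: clip the output into [0, 1] from both sides.
clipped = tf.clip_by_value(h_fc2, 0.0, 1.0)
ssd_clipped = tf.reduce_sum(tf.square(y_ - clipped))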
Are my observations to be expected?
Edit:
@mdaoust's suggestions proved to be correct. The main point was normalizing by the batch size. This can be done independently of the batch size by using this code:
squared_diff_image = tf.square(label_image - output_img)
# Sum over all dimensions except the first (the batch-dimension).
ssd_images = tf.reduce_sum(squared_diff_image, [1, 2, 3])
# Take mean ssd over batch.
error_images = tf.reduce_mean(ssd_images)
With this change, only a slight decrease of the learning rate (to 0.0001) was necessary.
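For completeness, the corresponding training step with the reduced learning rate would look like this (a sketch reusing error_images from above, not part of the original post):

train_step = tf.train.GradientDescentOptimizer(0.0001).minimize(error_images)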
Answer
There are a lot of ways you can end up with non-finite results.
But optimizers, especially simple ones like gradient descent, can diverge if the learning rate is 'too high'.
Have you tried simply dividing your learning rate by 10/100/1000? Or normalizing by pixels * batch_size to get the average error per pixel?
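Normalizing by pixels * batch_size is the same as taking the mean over all elements instead of the sum. A minimal sketch, reusing the y_ and h_fc2 tensors from the question:

# Mean squared error per pixel: the summed SSD divided by
# batch_size * height * width * channels.
mse_per_pixel = tf.reduce_mean(tf.square(y_ - h_fc2))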
Or one of the more advanced optimizers? For example, tf.train.AdamOptimizer() with default options.
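For instance, the training step from the question could be rewritten like this (a sketch; TensorFlow's default learning rate for Adam is 0.001):

train_step = tf.train.AdamOptimizer().minimize(ssd)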