How to optimize multiple loss functions separately in Keras?

Problem description


I am currently trying to build a deep learning model with three different loss functions in Keras. The first loss function is the typical mean squared error loss. The other two loss functions are ones I built myself, which find the difference between a calculation made on the input image and one made on the output image (this code is a simplified version of what I'm doing).

from keras import backend as K

def p_autoencoder_loss(yTrue, yPred):

    def loss(yTrue, yPred):
        return K.mean(K.square(yTrue - yPred), axis=-1)

    def a(image):
        return K.mean(K.sin(image))

    def b(image):
        return K.sqrt(K.cos(image))

    a_pred = a(yPred)
    a_true = a(yTrue)

    b_pred = b(yPred)
    b_true = b(yTrue)

    empirical_loss = loss(yTrue, yPred)
    a_loss = K.mean(K.square(a_true - a_pred))
    b_loss = K.mean(K.square(b_true - b_pred))
    final_loss = K.mean(empirical_loss + a_loss + b_loss)
    return final_loss
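As a sanity check outside Keras, the combined loss above can be mirrored in plain Python; this is a sketch only (flat lists of floats stand in for the image tensors, and all names and values here are illustrative), showing that the sum of the three terms is zero for identical inputs and positive otherwise.

```python
import math

# Plain-Python sketch of the combined loss; lists of floats stand in
# for image tensors (hypothetical data, for illustration only).
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def a(image):
    # mirrors K.mean(K.sin(image)) -> a single scalar
    return sum(math.sin(v) for v in image) / len(image)

def b(image):
    # mirrors K.sqrt(K.cos(image)) -> elementwise
    return [math.sqrt(math.cos(v)) for v in image]

def combined_loss(y_true, y_pred):
    empirical_loss = mse(y_true, y_pred)
    a_loss = (a(y_true) - a(y_pred)) ** 2
    b_loss = mse(b(y_true), b(y_pred))
    return empirical_loss + a_loss + b_loss

y = [0.1, 0.2, 0.3]
print(combined_loss(y, y))                     # identical inputs -> 0.0
print(combined_loss(y, [0.4, 0.5, 0.6]) > 0)   # mismatch -> positive
```

Because the three terms can have very different magnitudes, one dominating term can explain poor convergence of the summed loss.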


However, when I train with this loss function, it is simply not converging well. What I want to try is to minimize the three loss functions separately, not together by adding them into one loss function.


I essentially want to do the second option from "Tensorflow: Multiple loss functions vs Multiple training ops", but in Keras form. I also want the loss functions to be independent of each other. Is there a simple way to do this?

Answer


You could have 3 outputs in your Keras model, each with your specified loss, and Keras has support for weighting these losses. It will then also report a final combined loss in the output, but it will be optimising to reduce all three losses. Be wary with this, though: depending on your data/problem/losses, you might find training stalls slightly or is slow if the losses fight each other. This does, however, require the functional API. I'm unsure whether this actually creates separate optimiser instances, but I think it is as close as you will get in pure Keras, as far as I'm aware, without having to start writing more complex TF training regimes.

For example:

from tensorflow import keras
from tensorflow.keras import layers

loss_out1 = layers.Dense(1, activation='sigmoid', name='loss1')(x)
loss_out2 = layers.Dense(1, activation='sigmoid', name='loss2')(x)
loss_out3 = layers.Dense(1, activation='sigmoid', name='loss3')(x)

model = keras.Model(inputs=[input],
                    outputs=[loss_out1, loss_out2, loss_out3])
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              # a custom loss is passed as a callable, not a string
              loss=['binary_crossentropy', 'categorical_crossentropy', custom_loss1],
              loss_weights=[1., 1., 1.])


This should compile a model with 3 outputs at the end, built from (x) defined above. When you compile, you set the outputs as a list, and likewise set the losses and loss weights as lists. Note that when you call fit() you'll need to supply your target outputs three times as a list too, e.g. [y, y, y], since your model now has three outputs.
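Under the hood, what Keras minimizes with loss_weights is simply the weighted sum of the per-output losses. A minimal plain-Python sketch, using made-up batch loss values for illustration:

```python
# Hypothetical per-output loss values for one batch (illustrative only)
losses = [0.25, 0.10, 0.40]        # loss1, loss2, loss3
loss_weights = [1.0, 1.0, 1.0]     # as passed to model.compile

# Total objective Keras minimizes: the weighted sum of the output losses
total_loss = sum(w * l for w, l in zip(loss_weights, losses))
print(total_loss)  # roughly 0.75
```

Adjusting loss_weights is the main lever here: if one loss sits on a much larger scale than the others, shrinking its weight keeps it from dominating the gradient.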


I'm not a Keras expert, but it's pretty high-level and I'm not aware of another way using pure Keras. Hopefully someone can correct me with a better solution!

