This article describes how to write a custom loss function that skips NaN inputs; it may serve as a useful reference if you run into the same problem.

Problem Description

I am building an autoencoder, and my data contains NaN values. How do I create a custom (MSE) loss function that does not compute a loss when it encounters a NaN in the validation data?

I found the following hint online:

import tensorflow.compat.v1 as tf

def nan_mse(y_actual, y_predicted):
    # Use zero error where the target is NaN, squared error otherwise.
    per_instance = tf.where(tf.is_nan(y_actual),
                            tf.zeros_like(y_actual),
                            tf.square(tf.subtract(y_predicted, y_actual)))
    return tf.reduce_mean(per_instance, axis=0)

But the loss comes out as NaN when I try using the custom loss function in my callback function after each epoch:

predictions = autoencoder.predict(x_pred)
# Evaluate the custom loss on the held-out data (despite the name,
# nan_mse computes a squared error, not an absolute error).
mae = nan_mse(x_pred, predictions)
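For context, the per-epoch check above could be wired into a Keras callback roughly as follows. This is a minimal sketch, not code from the original question; the NanMSEMonitor class name and the x_pred validation array are assumptions:

from tensorflow.keras.callbacks import Callback

class NanMSEMonitor(Callback):
    """Hypothetical callback: report the NaN-aware loss after every epoch."""
    def __init__(self, x_pred):
        super().__init__()
        self.x_pred = x_pred

    def on_epoch_end(self, epoch, logs=None):
        predictions = self.model.predict(self.x_pred)
        print("epoch %d: nan_mse = %s" % (epoch, nan_mse(self.x_pred, predictions)))

# usage sketch:
# autoencoder.fit(x_train, x_train, epochs=10,
#                 callbacks=[NanMSEMonitor(x_pred)])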

Recommended Answer

I guess your loss function actually works well. The NaN value probably comes from the predictions, so the condition tf.is_nan(y_actual) does not filter it out. To also filter out NaNs in the predictions, do the following:

import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.keras import backend as K
import numpy as np


def nan_mse(y_actual, y_predicted):
    # Mark an entry as invalid if either the target or the prediction is NaN.
    stack = tf.stack((tf.is_nan(y_actual),
                      tf.is_nan(y_predicted)),
                     axis=1)
    is_nans = K.any(stack, axis=1)
    # Use zero error for invalid entries, squared error otherwise.
    per_instance = tf.where(is_nans,
                            tf.zeros_like(y_actual),
                            tf.square(tf.subtract(y_predicted, y_actual)))
    print(per_instance)
    return tf.reduce_mean(per_instance, axis=0)


print(nan_mse([1., 1., np.nan, 1., 0.], [1., 1., 0., 0., np.nan]))

Output:

tf.Tensor(0.2, shape=(), dtype=float32)
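Note that tf.reduce_mean still counts the zeroed-out positions in the denominator, which is why the example prints 0.2 (one squared error of 1 averaged over all 5 entries) rather than 1/3. If you would prefer to average only over the valid entries, a TF2-style variant could mask them out first. The sketch below is not part of the original answer, and the name nan_mse_masked is made up here:

import tensorflow as tf

def nan_mse_masked(y_actual, y_predicted):
    # Keep only positions where neither the target nor the prediction is NaN.
    valid = tf.logical_not(tf.logical_or(tf.math.is_nan(y_actual),
                                         tf.math.is_nan(y_predicted)))
    diff = tf.boolean_mask(y_actual, valid) - tf.boolean_mask(y_predicted, valid)
    # Average the squared error over the valid entries only.
    return tf.reduce_mean(tf.square(diff))

print(nan_mse_masked(tf.constant([1., 1., float('nan'), 1., 0.]),
                     tf.constant([1., 1., 0., 0., float('nan')])))
# tf.Tensor(0.33333334, shape=(), dtype=float32)

Either version can be passed directly as a Keras loss, e.g. autoencoder.compile(optimizer='adam', loss=nan_mse).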

That concludes this article on a custom loss function that skips NaN inputs; hopefully the recommended answer is helpful.
