Question
I am trying to use a custom loss function which depends on some arguments that the model does not have.
The model has two inputs (mel_specs and pred_inp) and expects a labels tensor for training:
def to_keras_example(example):
    # Prepare the ((inputs), labels) structure that Keras expects
    return (mel_specs, pred_inp), labels

# train_data is a tf.data.Dataset passed to model.fit(train_data, ...)
train_data = load_dataset(fp, 'train').map(to_keras_example).repeat()
In my loss function I need to calculate the lengths of mel_specs and pred_inp. This means my loss looks like this:
def rnnt_loss_wrapper(y_true, y_pred, mel_specs_inputs_):
    input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
    label_lengths = get_padded_length(y_true)
    return rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=input_lengths,
        label_lengths=label_lengths
    )
However, no matter which approach I choose, I run into an issue.
Option 1) Wrapping the loss function
If I wrap the loss function such that it returns a function taking y_true and y_pred, like this:
def rnnt_loss_wrapper(mel_specs_inputs_):
    def inner_(y_true, y_pred):
        input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
        label_lengths = get_padded_length(y_true)
        return rnnt_loss(
            acts=y_pred,
            labels=tf.cast(y_true, dtype=tf.int32),
            input_lengths=input_lengths,
            label_lengths=label_lengths
        )
    return inner_

model = create_model(hparams)
model.compile(
    optimizer=optimizer,
    loss=rnnt_loss_wrapper(model.inputs[0])
)
Here I get a _SymbolicException after calling model.fit():
tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [...]
Option 2) Using model.add_loss()
The documentation of add_loss() states:
[Adds a..] loss tensor(s), potentially dependent on layer inputs.
..
Arguments:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses
may also be zero-argument callables which create a loss tensor.
inputs: Ignored when executing eagerly. If anything ...
So I tried to do the following:
def rnnt_loss_wrapper(y_true, y_pred, mel_specs_inputs_):
    input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
    label_lengths = get_padded_length(y_true)
    return rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=input_lengths,
        label_lengths=label_lengths
    )

model = create_model(hparams)
model.add_loss(
    rnnt_loss_wrapper(
        y_true=model.inputs[2],
        y_pred=model.outputs[0],
        mel_specs_inputs_=model.inputs[0],
    ),
    inputs=True
)
model.compile(
    optimizer=optimizer
)
However, calling model.fit() raises a ValueError:
ValueError: No gradients provided for any variable: [...]
Is any of the above options supposed to work?
Answer
Did using a lambda function work? (https://www.w3schools.com/python/python_lambda.asp)
loss = lambda x1, x2: rnnt_loss(x1, x2, acts, labels, input_lengths,
                                label_lengths, blank_label=0)
This way your loss function is a function accepting the parameters x1 and x2, while rnnt_loss is also aware of acts, labels, input_lengths, label_lengths and blank_label.
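As a minimal sketch of how this closure idea could be wired into model.compile, reusing the names from the question (rnnt_loss, get_padded_length, create_model, hparams and optimizer are assumed to exist; make_rnnt_loss is just a hypothetical helper name). Whether this avoids the _SymbolicException from option 1 depends on your TensorFlow version and execution mode:

import tensorflow as tf

def make_rnnt_loss(mel_specs_inputs_, blank_label=0):
    # The returned lambda has the (y_true, y_pred) signature Keras expects;
    # every other argument is captured from the enclosing scope.
    return lambda y_true, y_pred: rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=get_padded_length(mel_specs_inputs_[:, :, 0]),
        label_lengths=get_padded_length(y_true),
        blank_label=blank_label,
    )

model = create_model(hparams)
model.compile(
    optimizer=optimizer,
    loss=make_rnnt_loss(model.inputs[0]),
)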