Question

I want to make the following loss function in Keras:
Loss = mse + double_derivative(y_pred,x_train)
I am not able to incorporate the derivative term. I have tried K.gradients(K.gradients(y_pred, x_train), x_train), but it does not help.
I am getting an error. Here is my code:
def _loss_tensor(y_true, y_pred, x_train):
    l1 = K.mean(K.square(y_true - y_pred), axis=-1)
    sigma = 0.01
    lamda = 3
    term = K.square(sigma) * K.gradients(K.gradients(y_pred, x_train), x_train)
    l2 = K.mean(lamda * K.square(term), axis=-1)
    return l1 + l2

def loss_func(x_train):
    def loss(y_true, y_pred):
        return _loss_tensor(y_true, y_pred, x_train)
    return loss

def create_model_neural(learning_rate, num_layers,
                        num_nodes, activation):
    model_neural = Sequential()
    x_train = model_neural.add(Dense(num_nod, input_dim=num_input, activation=activation))
    for i in range(num_layers - 1):
        model_neural.add(Dense(num_nodes, activation=activation, name=name))
    model_neural.add(Dense(1, activation=activation))
    optimizer = SGD(lr=learning_rate)
    model_loss = loss_func(x_train=x_train)
    model_neural.compile(loss=model_loss, optimizer=optimizer)
    return model_neural
Answer
The problem is that x_train is always None, and Keras cannot take a derivative with respect to None. This happens because model_neural.add(...) does not return anything.
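You can confirm this in a couple of lines. This is a minimal sketch using the tf.keras module paths, since the question does not show its imports:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# Sequential.add() mutates the model in place and returns None,
# so assigning its result to x_train gives you None, not a tensor.
ret = model.add(Dense(4, input_dim=2))
print(ret)  # None
```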
I assume that x_train is the input that is passed to the network. In that case, x_train should probably be another argument of create_model_neural, or alternatively you can try the model_neural.input tensor.