Question
I am building an auto-encoder and I want to encode my values into a logical matrix. However, when I use my custom step activation function in one of the intermediate layers (all the other layers use 'relu'), Keras raises this error:
An operation has `None` for gradient.
I've tried the hard-sigmoid function, but it doesn't fit my problem, because it still produces intermediate values when I only need binary ones. I am aware that my function has no gradient at most points, but is it possible to use some other function for the gradient calculation while still using the step function for the accuracy and loss calculations?
My activation function:
import tensorflow as tf
from tensorflow import keras

def binary_activation(x):
    # Tensors of ones and zeros with the same shape and dtype as the input
    ones = tf.ones(tf.shape(x), dtype=x.dtype.base_dtype)
    zeros = tf.zeros(tf.shape(x), dtype=x.dtype.base_dtype)
    # Element-wise hard threshold: 1 where x > 0.5, 0 elsewhere
    return keras.backend.switch(x > 0.5, ones, zeros)
I expect to be able to train the network with the binary step activation function and then use it as a typical auto-encoder, similar to the binary feature maps used in this paper.
Answer
As mentioned here, you could use tf.custom_gradient to define a "back-propagatable" gradient for your activation function.
Perhaps something like this:
@tf.custom_gradient
def binary_activation(x):
    ones = tf.ones(tf.shape(x), dtype=x.dtype.base_dtype)
    zeros = tf.zeros(tf.shape(x), dtype=x.dtype.base_dtype)

    def grad(dy):
        return ...  # TODO define gradient

    # Forward pass: hard threshold at 0.5; grad() is used during backprop
    return keras.backend.switch(x > 0.5, ones, zeros), grad
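One common way to fill in that gradient, though not specified above, is the straight-through estimator: treat the step function as the identity during back-propagation and pass the upstream gradient through unchanged. A minimal sketch under that assumption (the name binary_activation_ste is hypothetical), for TensorFlow 2.x:

import tensorflow as tf
from tensorflow import keras

@tf.custom_gradient
def binary_activation_ste(x):
    # Forward pass: element-wise hard threshold at 0.5
    ones = tf.ones(tf.shape(x), dtype=x.dtype.base_dtype)
    zeros = tf.zeros(tf.shape(x), dtype=x.dtype.base_dtype)

    def grad(dy):
        # Straight-through estimator: pretend the activation was the
        # identity and pass the upstream gradient through unchanged
        return dy

    return keras.backend.switch(x > 0.5, ones, zeros), grad

The function can then be used like any built-in activation, e.g. keras.layers.Dense(32, activation=binary_activation_ste), so the forward pass produces strictly binary outputs while training still receives a usable gradient.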