Problem description
How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1) to leaky relu? I have tried looping over the tensor using max(value, 0.01*value), but I get TypeError: Using a tf.Tensor as a Python bool is not allowed.
I also tried to find the source code for relu on the Tensorflow GitHub so that I could modify it into a leaky relu, but I couldn't find it.
Recommended answer
You could write one based on tf.nn.relu, something like:
import tensorflow as tf

def lrelu(x, alpha):
    # equivalent to max(x, alpha * x): relu(x) keeps the positive part, -alpha * relu(-x) scales the negative part
    return tf.nn.relu(x) - alpha * tf.nn.relu(-x)
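With that helper, the line from the question could be rewritten along these lines (z, G_W1 and G_b1 are the variables from the question; the slope 0.2 is just an illustrative choice):

G_h1 = lrelu(tf.matmul(z, G_W1) + G_b1, alpha=0.2)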
Edit
Tensorflow 1.4 now has a native tf.nn.leaky_relu.
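On 1.4 or later, a call like the following should work; tf.nn.leaky_relu defaults to alpha=0.2, and alpha=0.01 here only mirrors the slope used in the question:

G_h1 = tf.nn.leaky_relu(tf.matmul(z, G_W1) + G_b1, alpha=0.01)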