Question
I want to use the max_pool_with_argmax operation in TensorFlow, but I get the following error:

LookupError: No gradient defined for operation 'MaxPoolWithArgmax_1' (op type: MaxPoolWithArgmax)
Here is the piece of my code that uses max_pool_with_argmax:
BN_relu13 = tf.nn.relu(tf.nn.batch_normalization(h_conv13, batch_mean13, batch_var13, tf.Variable(tf.zeros([64])), tf.Variable(tf.ones([64])), epsilon))
# max pooling
h_pool1, argmax_1 = max_pool_2x2(BN_relu13)
Here is the max_pool_2x2 function:
def max_pool_2x2(x):
    return tf.nn.max_pool_with_argmax(x, ksize=[1, 2, 2, 1],
                                      strides=[1, 2, 2, 1], padding='SAME')
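For reference, here is a minimal standalone sketch of what max_pool_with_argmax returns: the pooled values plus the flattened indices of each maximum (the toy input here is illustrative; it assumes a TensorFlow version with eager execution):

```python
import tensorflow as tf

# A 4x4 single-channel image holding the values 0..15.
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

pooled, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# pooled has shape [1, 2, 2, 1] with the window maxima: 5, 7, 13, 15.
# argmax holds the flat index of each maximum within the image,
# computed as ((y * width + x) * channels + c): 5, 7, 13, 15.
```

The argmax tensor is what makes this op useful for unpooling in encoder-decoder networks, which is the usual reason to prefer it over plain tf.nn.max_pool.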
After I build my network, I use AdamOptimizer to train it.
By the way, when I use tf.nn.max_pool (instead of tf.nn.max_pool_with_argmax), everything works fine.
I am running my code on a GPU, using Python 2.7 on Ubuntu 14.
Thanks, Ali
Answer
The gradient op is actually implemented here, but somehow not registered here. It would be worth filing a GitHub issue about it.
In the meantime, you can register the gradient yourself by following this tutorial (only the gradient-registration part applies).
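A minimal sketch of that workaround is below. It reuses the plain max-pool gradient kernel for MaxPoolWithArgmax; the call to gen_nn_ops.max_pool_grad and the attribute names come from TensorFlow's internal (non-public) API, so they may differ between versions, and recent TensorFlow releases already register this gradient (in which case the registration raises KeyError and is skipped):

```python
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_nn_ops

try:
    @ops.RegisterGradient("MaxPoolWithArgmax")
    def _max_pool_with_argmax_grad(op, grad, unused_argmax_grad):
        # Forward the gradient of the pooled values through the
        # ordinary max-pool gradient kernel; the argmax output of
        # the op is not differentiable, so its gradient is ignored.
        return gen_nn_ops.max_pool_grad(
            op.inputs[0],             # original input
            op.outputs[0],            # pooled output
            grad,                     # gradient w.r.t. the pooled output
            op.get_attr("ksize"),
            op.get_attr("strides"),
            op.get_attr("padding"))
except KeyError:
    # Gradient already registered by this TensorFlow build.
    pass
```

Once the gradient is registered, optimizers such as AdamOptimizer can backpropagate through max_pool_with_argmax like any other op.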