Problem description
I'm trying to build an LSTM RNN that handles 3D data in TensorFlow. From this paper, Grid LSTM RNNs can be n-dimensional. The idea for my network is to have a 3D volume [depth, x, y], and the network should be [depth, x, y, n_hidden], where n_hidden is the number of LSTM cell recursive calls. The idea is that each pixel gets its own "string" of LSTM recursive calls.
The output should be [depth, x, y, n_classes]. I'm doing binary segmentation -- think foreground and background -- so the number of classes is just 2.
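For concreteness, here is the intended shape flow for a single example, sketched with the sizes defined in the network parameters below:

# Intended shape flow for one example (batch size 1), using the sizes
# from the network parameters below.
n_depth, n_input_x, n_input_y = 5, 200, 200
n_hidden, n_classes = 128, 2

input_volume  = (n_depth, n_input_x, n_input_y)              # (5, 200, 200)
hidden_states = (n_depth, n_input_x, n_input_y, n_hidden)    # (5, 200, 200, 128)
output_logits = (n_depth, n_input_x, n_input_y, n_classes)   # (5, 200, 200, 2)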
import tensorflow as tf
# Module paths matching the identifiers used below (TensorFlow 0.x-era layout)
from tensorflow.python.ops import rnn
from tensorflow.contrib.grid_rnn.python.ops import grid_rnn_cell

# Network Parameters
n_depth = 5
n_input_x = 200  # input volume width
n_input_y = 200  # input volume height
n_hidden = 128   # hidden layer num of features
n_classes = 2    # binary segmentation: foreground vs. background

# tf Graph input
x = tf.placeholder("float", [None, n_depth, n_input_x, n_input_y])
y = tf.placeholder("float", [None, n_depth, n_input_x, n_input_y, n_classes])

# Define weights: one linear readout (n_hidden -> n_classes) per voxel
weights = {}
biases = {}
for i in xrange(n_depth * n_input_x * n_input_y):
    weights[i] = tf.Variable(tf.random_normal([n_hidden, n_classes]))
    biases[i] = tf.Variable(tf.random_normal([n_classes]))
def RNN(x, weights, biases):
    # Prepare data shape to match the `rnn` function requirements:
    # `rnn.rnn` expects a list of 2-D tensors, one per step.
    # Current data input shape: (batch_size, n_depth, n_input_x, n_input_y)
    x = tf.reshape(x, [-1, n_input_y, n_depth * n_input_x])
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input_x * n_depth])

    # Split into a list of 'n_depth * n_input_x * n_input_y' tensors,
    # one per voxel
    x = tf.split(0, n_depth * n_input_x * n_input_y, x)

    # Define a grid LSTM cell with tensorflow
    lstm_cell = grid_rnn_cell.GridRNNCell(n_hidden, input_dims=[n_depth, n_input_x, n_input_y])
    # lstm_cell = rnn_cell.MultiRNNCell([lstm_cell] * 12, state_is_tuple=True)
    # lstm_cell = rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=0.8)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using the rnn inner-loop output of each voxel
    output = []
    for i in xrange(n_depth * n_input_x * n_input_y):
        # I'll need to do some sort of reshape here on outputs[i]
        output.append(tf.matmul(outputs[i], weights[i]) + biases[i])
    return output
pred = RNN(x, weights, biases)
pred = tf.transpose(tf.pack(pred), [1, 0, 2])
pred = tf.reshape(pred, [-1, n_depth, n_input_x, n_input_y, n_classes])

# Flatten predictions and labels to (num_voxels, n_classes) for the loss
temp_pred = tf.reshape(pred, [-1, n_classes])
temp_y = tf.reshape(y, [-1, n_classes])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))
I'm currently getting the error: TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
It occurs after the RNN initialization: outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
x, of course, is of type float32.
I am unable to tell what type GridRNNCell returns -- any help here? This could be the issue. Should I be defining more arguments to it? input_dims makes sense, but what should output_dims be?
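For reference, here is a minimal sketch of how this constructor appears to be meant to be called, based on reading contrib/grid_rnn/python/ops/grid_rnn_cell.py -- the exact keyword names, and the interpretation of input_dims/output_dims as dimension indices rather than sizes, are assumptions to verify against your checkout:

# A sketch, not the question's code. It assumes GridRNNCell takes the
# number of grid dimensions plus lists of dimension indices saying where
# input is fed in and output is read out (assumption).
from tensorflow.contrib.grid_rnn.python.ops import grid_rnn_cell

cell = grid_rnn_cell.GridRNNCell(
    num_units=128,    # hidden size per dimension
    num_dims=3,       # a 3-D grid, matching [depth, x, y]
    input_dims=[0],   # feed input along dimension 0 (assumption)
    output_dims=[0])  # read output from dimension 0 (assumption)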
Is this a bug in the contrib code?
GridRNNCell is located in contrib/grid_rnn/python/ops/grid_rnn_cell.py
Answer
Which version of the Grid LSTM cell are you using?
If you are using https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/rnn_cell.py, I think you can try to initialize 'feature_size' and 'frequency_skip'. Also, I think there may be another bug: feeding a dynamic shape into this version may cause a TypeError.
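A minimal sketch of the initialization being suggested, assuming the GridLSTMCell class defined in that file -- the feature_size and frequency_skip values below are illustrative placeholders, not values taken from the question:

# Sketch only: assumes the GridLSTMCell in contrib/rnn/python/ops/rnn_cell.py.
# feature_size and frequency_skip here are placeholder values (assumption).
from tensorflow.contrib.rnn.python.ops import rnn_cell

cell = rnn_cell.GridLSTMCell(
    num_units=128,
    feature_size=200,   # size of each input feature block (placeholder)
    frequency_skip=1)   # hop between successive frequency blocks (placeholder)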