How to feed input to an LSTM RNN in TensorFlow using tf.unstack:
So my input has shape [4, 3, 2], i.e. [batch_size, time_steps, n_input], and I try:
dataset = [[[3, 5], [7, 2], [7, 6]],
           [[2, 5], [1, 3], [4, 3]],
           [[8, 1], [1, 8], [9, 3]],
           [[1, 5], [6, 7], [4, 9]]]
import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np
input_x = tf.placeholder(dtype=tf.int32, shape=[4, 3, 2])
input_x = tf.cast(input_x, tf.float32)
data = tf.unstack(input_x, 3, axis=1)

with tf.variable_scope('encoder') as scope:
    cell = rnn.LSTMCell(num_units=250)
    model = tf.nn.bidirectional_dynamic_rnn(cell, cell, inputs=data, dtype=tf.float32)
    output, (fs, fc) = model

with tf.Session() as sess:
    unstack_output, output_n = sess.run([output, data], feed_dict={input_x: dataset})
    print(unstack_output, output_n)
I get the error message:
/anaconda/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
Traceback (most recent call last):
  File "/Users/exepaul/Desktop/limit_exceed/nad.py", line 25, in <module>
    model=tf.nn.bidirectional_dynamic_rnn(cell,cell,inputs=data,dtype=tf.float32)
  File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 416, in bidirectional_dynamic_rnn
    time_major=time_major, scope=fw_scope)
  File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 632, in dynamic_rnn
    dtype=dtype)
  File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 695, in _dynamic_rnn_loop
    for input_ in flat_input)
  File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 695, in <genexpr>
    for input_ in flat_input)
  File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 673, in with_rank_at_least
    raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape (2, 4) must have rank at least 3
How do I adjust the input to the RNN after tf.unstack?
I tried checking this, but there was no answer.
My setup
TensorFlow: 1.6.0
Python: 3.5.4 |Anaconda custom (x86_64)|
macOS: 10.12.4
Best Answer
Why are you unstacking the input data at all?

The input to the RNN should be a tensor of shape [batch_size, max_time, n_input] when time_major == False (the default), or of shape [max_time, batch_size, n_input] when time_major == True.

Simply pass the input without the unstack operation and the problem is solved.
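For reference, a minimal sketch of that fix under TensorFlow 1.x (the tf.contrib / tf.placeholder API from the question); the separate fw_cell / bw_cell names and the initializer call are illustrative additions, not part of the original code:

import tensorflow as tf
from tensorflow.contrib import rnn

dataset = [[[3, 5], [7, 2], [7, 6]],
           [[2, 5], [1, 3], [4, 3]],
           [[8, 1], [1, 8], [9, 3]],
           [[1, 5], [6, 7], [4, 9]]]

# [batch_size, max_time, n_input]; fed directly, no tf.unstack
input_x = tf.placeholder(dtype=tf.float32, shape=[4, 3, 2])

with tf.variable_scope('encoder'):
    # separate cell objects so each direction keeps its own weights
    fw_cell = rnn.LSTMCell(num_units=250)
    bw_cell = rnn.LSTMCell(num_units=250)
    outputs, states = tf.nn.bidirectional_dynamic_rnn(
        fw_cell, bw_cell, inputs=input_x, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    fw_out, bw_out = sess.run(outputs, feed_dict={input_x: dataset})
    print(fw_out.shape, bw_out.shape)  # (4, 3, 250) (4, 3, 250)

tf.unstack along axis=1 would give you a Python list of 3 tensors of shape [4, 2], which is what the older static RNN APIs expect; dynamic_rnn and bidirectional_dynamic_rnn want the single 3-D tensor instead, which is why the unstacked list fails the rank check.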