I want to implement an RNN using TensorFlow 1.13 on GPU. Following the official recommendation, I wrote the following code to get a stack of RNN cells:

```python
lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]
cells = tk.layers.StackedRNNCells(lstm)
```

However, I receive an error message. How can I correct it?

Solution

Thanks @qlzh727. Here, I quote the response:

StackedRNNCells only works with cells, not layers. The difference between a cell and a layer in an RNN is that a cell processes only one time step of the sequence, whereas a layer processes the whole sequence. You can treat an RNN layer as:

```python
for t in whole_time_steps:
    output_t, state_t = cell(input_t, state_t_minus_1)
```

If you want to stack two LSTM layers together with cuDNN in TF 1.x, you can do:

```python
l1 = tf.keras.layers.CuDNNLSTM(128, return_sequences=True)
l2 = tf.keras.layers.CuDNNLSTM(128)
l1_output = l1(input)
l2_output = l2(l1_output)
```

In TF 2.x, the cuDNN and the normal implementation are unified, so you can just change the example above to use tf.keras.layers.LSTM(128, return_sequences=True), which will use the cuDNN implementation when it is available.
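To make the cell-versus-layer distinction concrete without requiring TensorFlow, here is a minimal pure-Python sketch. The cell and layer below are toy stand-ins (a scalar tanh recurrence with made-up weights, not LSTM); they only illustrate that a "layer" is a loop applying a "cell" once per time step, and that stacking requires the first layer to return the full sequence:

```python
import math

def rnn_cell(x_t, h_prev, w_x=0.5, w_h=0.5):
    """Toy cell: processes a SINGLE time step and returns the new state."""
    return math.tanh(w_x * x_t + w_h * h_prev)

def rnn_layer(xs, h0=0.0, return_sequences=False):
    """Toy layer: runs the cell over the WHOLE sequence.

    With return_sequences=True it returns one output per time step
    (what the next stacked layer needs as input); otherwise it returns
    only the final state.
    """
    h = h0
    outputs = []
    for x_t in xs:
        h = rnn_cell(x_t, h)
        outputs.append(h)
    return outputs if return_sequences else h

# Stacking two layers: the first must emit the full sequence,
# mirroring return_sequences=True on l1 in the TF example above.
seq = [1.0, 0.5, -0.5, 0.25]
l1_output = rnn_layer(seq, return_sequences=True)  # one value per time step
l2_output = rnn_layer(l1_output)                   # final state only
```

The same shape contract holds in Keras: every stacked recurrent layer except the last needs `return_sequences=True` so the next layer receives a sequence rather than a single final state.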