Understanding a simple LSTM in PyTorch


Question

import torch, ipdb
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = Variable(torch.randn(5, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input, (h0, c0))

This is the LSTM example from the docs. I don't understand the following things:

  1. What is output-size and why is it not specified anywhere?
  2. Why does the input have 3 dimensions? What do the 5 and 3 represent?
  3. What are the 2 and 3 in h0 and c0, and what do they represent?
import torch, ipdb
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

num_layers = 3
num_hyperparams = 4
batch = 1
hidden_size = 20
rnn = nn.LSTM(input_size=num_hyperparams, hidden_size=hidden_size, num_layers=num_layers)

input = Variable(torch.randn(1, batch, num_hyperparams)) # (seq_len, batch, input_size)
h0 = Variable(torch.randn(num_layers, batch, hidden_size)) # (num_layers, batch, hidden_size)
c0 = Variable(torch.randn(num_layers, batch, hidden_size))
output, hn = rnn(input, (h0, c0))
affine1 = nn.Linear(hidden_size, num_hyperparams) # maps a hidden state back to hyperparameter space (not applied yet)

ipdb.set_trace()
print(output.size()) # torch.Size([1, 1, 20])
print(h0.size())     # torch.Size([3, 1, 20])

Answer

The output of the LSTM is the output of all the hidden nodes on the final layer.
hidden_size - the number of LSTM blocks per layer.
input_size - the number of input features per time step.
num_layers - the number of hidden layers.
In total there are hidden_size * num_layers LSTM blocks.
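
A quick way to see this structure (a minimal check, not part of the original answer) is to print the parameter shapes of the two-layer LSTM from the question: each layer holds one input-to-hidden and one hidden-to-hidden weight matrix, with the four gates stacked along the rows.

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
for name, param in rnn.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 (80, 10) -> (4*hidden_size, input_size)
# weight_hh_l0 (80, 20) -> (4*hidden_size, hidden_size)
# bias_ih_l0 (80,)
# bias_hh_l0 (80,)
# weight_ih_l1 (80, 20) -> layer 1 consumes layer 0's hidden state
# weight_hh_l1 (80, 20)
# bias_ih_l1 (80,)
# bias_hh_l1 (80,)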

The input dimensions are (seq_len, batch, input_size).
seq_len - the number of time steps in each input stream.
batch - the size of each batch of input sequences.

The hidden and cell dimensions are: (num_layers, batch, hidden_size)
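
Running the question's example confirms these shapes (a quick check, reusing the rnn from the snippet above; on recent PyTorch the Variable wrapper is no longer needed):

input = torch.randn(5, 3, 10)  # seq_len=5, batch=3, input_size=10
output, (hn, cn) = rnn(input)  # initial hidden/cell states default to zeros
print(output.size())  # torch.Size([5, 3, 20]) -> (seq_len, batch, hidden_size)
print(hn.size())      # torch.Size([2, 3, 20]) -> (num_layers, batch, hidden_size)
print(cn.size())      # torch.Size([2, 3, 20]) -> same layout for the cell state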

So there will be hidden_size * num_directions outputs. You didn't initialise the RNN to be bidirectional so num_directions is 1. So output_size = hidden_size.
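
For comparison (this goes beyond the original answer, which assumes a non-bidirectional LSTM), passing bidirectional=True doubles num_directions, and the last output dimension doubles with it:

rnn_bi = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
out_bi, (hn_bi, cn_bi) = rnn_bi(torch.randn(5, 3, 10))
print(out_bi.size())  # torch.Size([5, 3, 40]) -> (seq_len, batch, hidden_size * num_directions)
print(hn_bi.size())   # torch.Size([4, 3, 20]) -> (num_layers * num_directions, batch, hidden_size)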

Edit: You can change the number of outputs by using a linear layer:

out_rnn, hn = rnn(input, (h0, c0))
lin = nn.Linear(hidden_size, output_size)
# flatten (seq_len, batch, hidden_size) to (seq_len*batch, hidden_size) so the
# linear layer can be applied, then restore the sequence layout
output = lin(out_rnn.view(seq_len * batch, hidden_size)).view(seq_len, batch, output_size)
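
As a sanity check (with seq_len=5, batch=3, hidden_size=20 and an arbitrary output_size=7), output.size() comes out as torch.Size([5, 3, 7]): the time and batch dimensions pass through unchanged and only the feature dimension is remapped. On recent PyTorch versions nn.Linear also broadcasts over leading dimensions, so output = lin(out_rnn) gives the same result without the explicit view calls.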

Note: for this answer I assumed that we're only talking about non-bidirectional LSTMs.

Source: PyTorch documentation.
