Problem Description
I am implementing a straightforward feedforward neural network in PyTorch. However, I am wondering whether there is a nicer way to add a flexible number of layers to the network? Maybe by naming them in a loop, but I heard that's impossible?
Currently I am doing it like this:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim):
        super(Net, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.hidden_dim = hidden_dim
        self.layer_dim = len(hidden_dim)
        self.fc1 = nn.Linear(self.input_dim, self.hidden_dim[0])
        i = 1
        if self.layer_dim > i:
            self.fc2 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc3 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc4 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc5 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc6 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc7 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc8 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        self.fcn = nn.Linear(self.hidden_dim[-1], self.output_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        i = 1
        if self.layer_dim > i:
            x = F.relu(self.fc2(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc3(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc4(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc5(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc6(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc7(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc8(x))
            i += 1
        x = F.softmax(self.fcn(x), dim=-1)
        return x
Recommended Answer
You can put your layers in a ModuleList container:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim):
        super(Net, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.hidden_dim = hidden_dim
        current_dim = input_dim
        self.layers = nn.ModuleList()
        # Build one Linear layer per entry in hidden_dim
        for hdim in hidden_dim:
            self.layers.append(nn.Linear(current_dim, hdim))
            current_dim = hdim
        # Final layer maps the last hidden size to the output size
        self.layers.append(nn.Linear(current_dim, output_dim))

    def forward(self, x):
        # Apply ReLU to every layer except the last
        for layer in self.layers[:-1]:
            x = F.relu(layer(x))
        out = F.softmax(self.layers[-1](x), dim=-1)
        return out
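As a quick sanity check, here is a minimal usage sketch of the class above; the feature count, hidden sizes, batch size, and class count below are arbitrary values chosen only for illustration:

# Hypothetical dimensions: 10 input features, two hidden layers, 3 output classes
net = Net(input_dim=10, output_dim=3, hidden_dim=[64, 32])
x = torch.randn(5, 10)   # a batch of 5 samples
probs = net(x)           # shape (5, 3); each row sums to 1 after softmax
print(probs.shape)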
It is very important to use PyTorch containers for the layers, and not just a plain Python list. The reason: modules stored in a plain list are not registered as submodules, so their parameters do not appear in model.parameters(), are not moved by .to(device), and are never seen by the optimizer. See this answer for more details.
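A minimal sketch of the difference (the class names BrokenNet and FixedNet are made up for this example):

class BrokenNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain list: the Linear modules are NOT registered as submodules
        self.layers = [nn.Linear(10, 10) for _ in range(3)]

print(len(list(BrokenNet().parameters())))  # 0 -- nothing for the optimizer to train

class FixedNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each Linear, so its parameters are tracked
        self.layers = nn.ModuleList(nn.Linear(10, 10) for _ in range(3))

print(len(list(FixedNet().parameters())))   # 6 -- weight and bias for each of 3 layers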