Is there a computational efficiency difference between nn.functional and nn.Sequential in PyTorch?

This article looks at whether there is a computational efficiency difference between PyTorch's nn.functional and nn.Sequential APIs, and when you would use one instead of the other.

Question

The following is a feed-forward network using PyTorch's nn.functional module:

import torch.nn as nn
import torch.nn.functional as F

class newNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self,x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        # dim=1 applies softmax over the class scores of each sample;
        # calling F.softmax without dim is deprecated and raises a warning.
        x = F.softmax(self.fc3(x), dim=1)
        return x

model = newNetwork()
print(model)
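As a quick sanity check (a minimal sketch; the 784-feature input assumes flattened 28x28 MNIST-style images), a random batch can be pushed through the model:

import torch

x = torch.randn(4, 784)   # hypothetical batch of 4 flattened 28x28 images
out = model(x)
print(out.shape)          # torch.Size([4, 10])
print(out.sum(dim=1))     # each row sums to 1 after softmax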

The following builds essentially the same feed-forward network using nn.Sequential. What is the difference between the two, and when would I use one instead of the other?

input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build the feed-forward network

model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)
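Since the question is about computational efficiency, here is a rough micro-benchmark sketch (the batch size and iteration count are assumptions, and numbers will vary by hardware; for serious measurements use torch.utils.benchmark):

import time
import torch

x = torch.randn(64, 784)  # assumed batch size

def time_net(net, iters=1000):
    # Crude CPU wall-clock timing with gradients disabled;
    # no warm-up or variance handling.
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(iters):
            net(x)
    return time.perf_counter() - start

print("nn.functional version:", time_net(newNetwork()))
print("nn.Sequential version:", time_net(model))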

Answer

There is no difference between the two. The latter is arguably more concise and easier to write, and the reason "object" versions of pure (i.e., stateless) functions like ReLU and Sigmoid exist is to allow their use in constructs like nn.Sequential.
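To make the equivalence concrete, the module forms are thin stateless wrappers around the corresponding functional calls (a minimal sketch):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 10)

# The module wrappers delegate to the same functional implementations,
# which is what lets them slot into nn.Sequential.
assert torch.equal(nn.ReLU()(x), F.relu(x))
assert torch.equal(nn.Softmax(dim=1)(x), F.softmax(x, dim=1))
print("module and functional forms match")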

This concludes the look at whether there is a computational efficiency difference between nn.functional and nn.Sequential in PyTorch; hopefully the answer above is helpful.
