How to reproduce RNN results on multiple runs?

Problem description

I call the same model on the same input twice in a row and don't get the same result. The model has nn.GRU layers, so I suspect it has some internal state that should be reset before the second run?

How do I reset the RNN hidden state so it is the same as when the model was initially loaded?
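For what it's worth, nn.GRU itself keeps no hidden state between calls: the hidden state is an explicit argument, and when it is omitted PyTorch defaults it to zeros. A minimal standalone sketch (with made-up layer sizes, unrelated to the WaveRNN model) illustrating that two identical calls give identical outputs:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # only so the example weights are reproducible

# Hypothetical sizes, just for illustration
gru = nn.GRU(input_size=4, hidden_size=8, batch_first=True)
gru.eval()

x = torch.randn(1, 5, 4)  # (batch, seq, features)

# nn.GRU carries no state across calls: when h0 is omitted it
# defaults to zeros, so both calls see the same initial state.
out1, h1 = gru(x)
out2, h2 = gru(x)
print(torch.equal(out1, out2))  # → True
```

So if two runs differ, the cause is usually not a leftover hidden state but some other source of randomness, such as sampling.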

Update:

Some context:

I'm trying to run the model from here:

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L93

I'm calling generate:

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L148

Here is the code that actually uses the random generator in pytorch:

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L200

https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py#L110

https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py#L129

I have placed (I'm running the code on CPU):

import numpy as np
import torch

torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)

https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py

right after all the imports.

I have checked the GRU weights between the two runs and they are the same:

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L153

I have also checked the logits and sample between runs: the logits are the same but the samples are not, so @Andrew Naguib seems to be right about random seeding. But I'm not sure where the code that fixes the random seed should be placed?

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L200
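This matches the observation above: the logits agree, so the divergence comes from the sampling step, which consumes PyTorch's global RNG. A small standalone sketch (not the WaveRNN code; torch.multinomial stands in for its sampling call) showing that identical probabilities still give different samples unless the seed is reset just before sampling:

```python
import torch

logits = torch.tensor([[1.0, 2.0, 3.0]])
probs = torch.softmax(logits, dim=-1)

# Each multinomial call advances the global RNG, so consecutive
# samples from the *same* probabilities generally differ.
a = torch.multinomial(probs, num_samples=10, replacement=True)
b = torch.multinomial(probs, num_samples=10, replacement=True)

# Re-seeding immediately before each sampling call makes them match.
torch.manual_seed(0)
c = torch.multinomial(probs, num_samples=10, replacement=True)
torch.manual_seed(0)
d = torch.multinomial(probs, num_samples=10, replacement=True)
print(torch.equal(c, d))  # → True
```

This is why seeding once at import time is not enough if anything else draws from the RNG before generate runs: the seed has to be set at the point where sampling starts.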

Update 2:

I have placed the seed init inside generate and now the results are consistent:

https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L148

Answer

I believe this may be highly related to random seeding. To ensure reproducible results (as stated in the PyTorch docs) you have to seed torch like this:

import torch
torch.manual_seed(0)

And also the CuDNN module:

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

If you're using numpy, you could also do:

import numpy as np
np.random.seed(0)

However, they warn you that:

Deterministic mode can have a performance impact, depending on your model.

A suggested script I regularly use, and which has worked very well for reproducing results, is:

# imports
import random
import numpy as np
import torch
from torch.backends import cudnn
# ...
""" Set Random Seed """
if args.random_seed is not None:
    """Following seeding lines of code are to ensure reproducible results
       Seeding the two pseudorandom number generators involved in PyTorch"""
    random.seed(args.random_seed)
    np.random.seed(args.random_seed)
    torch.manual_seed(args.random_seed)
    # https://pytorch.org/docs/master/notes/randomness.html#cudnn
    if not args.cpu_only:
        torch.cuda.manual_seed(args.random_seed)
        cudnn.deterministic = True
        cudnn.benchmark = False
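If you need to reset the RNG in more than one place (for example at the top of generate, as in the update above), it can be cleaner to wrap the lines into a helper. A sketch under the assumption of a seed_everything helper name, which is not part of the original script:

```python
import random
import numpy as np
import torch
from torch.backends import cudnn

def seed_everything(seed: int, cpu_only: bool = True) -> None:
    """Reset every RNG that PyTorch code typically touches.

    Calling this at the start of generate() makes each run
    begin from the same RNG state.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if not cpu_only:
        torch.cuda.manual_seed_all(seed)
        cudnn.deterministic = True
        cudnn.benchmark = False

# Two draws after identical resets produce identical tensors.
seed_everything(0)
x = torch.rand(3)
seed_everything(0)
y = torch.rand(3)
print(torch.equal(x, y))  # → True
```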

