How to get the same loss values every time when training a CNN (MNIST dataset) with TensorFlow


Problem description

I want to train a convolutional neural network (with the MNIST data set and TensorFlow) several times from scratch and get the same accuracy results on every run. To achieve this I:

  1. Save the untrained, freshly initialized (global_variables_initializer) network
  2. Load it every time I start training the untrained net
  3. Set shuffle=False in mnist.train.next_batch, so the image order is the same every time
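The three steps above can be sketched in plain NumPy. This is a hypothetical toy model, not TensorFlow: `init_weights`, `loss_on_batch`, and `training_run` are illustrative stand-ins. The point is that on a CPU, restarting from the same saved weights with the same batch order reproduces the loss sequence exactly.

```python
import numpy as np

def init_weights(rng):
    # Stand-in for the freshly initialized network (step 1: save these).
    return rng.normal(size=(4,))

def loss_on_batch(w, batch):
    # Toy MSE-style loss on a batch of inputs.
    return float(np.mean((batch @ w) ** 2))

def training_run(saved_w, batches):
    """Start from the saved weights and consume batches in a fixed order."""
    w = saved_w.copy()          # step 2: reload the saved initial weights
    losses = []
    for batch in batches:       # step 3: fixed, unshuffled batch order
        losses.append(loss_on_batch(w, batch))
        grad = 2 * batch.T @ (batch @ w) / len(batch)  # gradient of the toy loss
        w -= 0.01 * grad
    return losses

rng = np.random.default_rng(0)
saved = init_weights(rng)
data = [rng.normal(size=(8, 4)) for _ in range(3)]

run1 = training_run(saved, data)
run2 = training_run(saved, data)
assert run1 == run2  # on a CPU, the loss sequences are bit-identical
```

With all sources of randomness pinned down like this, any remaining run-to-run difference must come from the operations themselves, which is exactly what the question below is about.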

I have done this before with a feed-forward net (3 hidden layers), and every time I run that Python script I get exactly the same values for loss and accuracy.

But the "same" script, with the model changed from a feed-forward net to a convolutional neural net, produces a slightly different loss/accuracy every time I run it.

So I reduced the batch size to one and looked at the loss value for each individual image: the first two images always have the same loss values, but the rest differ slightly on every run of the script.

Any idea why?

Answer

Thanks to @AlexandrePassos's comment, I searched for deterministic/non-deterministic operations in TensorFlow.

At the moment, all operations that use CUDA atomics and run on the GPU are non-deterministic.
See this issue: https://github.com/tensorflow/tensorflow/issues/3103
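The underlying reason is that floating-point addition is not associative, so CUDA atomics, which accumulate partial results in whatever order the GPU threads happen to run, can produce a slightly different sum on each run. A minimal, deliberately extreme Python illustration of this order-dependence:

```python
xs = [1e16, 1.0, -1e16]

# Left-to-right: 1e16 + 1.0 rounds back to 1e16, because 1.0 is below the
# float64 resolution at that magnitude, so the final result is 0.0.
left_to_right = (xs[0] + xs[1]) + xs[2]

# Reordered: the two large terms cancel first, so the 1.0 survives.
reordered = (xs[0] + xs[2]) + xs[1]

print(left_to_right, reordered)  # 0.0 1.0
```

In a real reduction over thousands of gradient contributions the discrepancies are tiny rather than this dramatic, but they are enough to make per-image loss values drift apart between runs, exactly as observed above.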

If somebody knows a way to build a CNN with TensorFlow on the GPU using only deterministic operations, please answer: how do you create a CNN with deterministic operations in TensorFlow on a GPU?
