This article looks at a batch normalization question in TensorFlow; it may be a useful reference if you are running into the same problem.

Problem Description

I have looked at a few BN examples but am still a bit confused. So I am currently using the function below, which calls the function documented here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md

from tensorflow.contrib.layers.python.layers import batch_norm
import tensorflow as tf

def bn(x, is_training, name):
    bn_train = batch_norm(x, decay=0.9, center=True, scale=True,
                          updates_collections=None,
                          is_training=True,
                          reuse=None,
                          trainable=True,
                          scope=name)
    bn_inference = batch_norm(x, decay=1.00, center=True, scale=True,
                              updates_collections=None,
                              is_training=False,
                              reuse=True,
                              trainable=False,
                              scope=name)
    z = tf.cond(is_training, lambda: bn_train, lambda: bn_inference)
    return z

The following is a toy run where I am just checking that the function reuses the means and variances calculated in a training step for two features. Running this part of the code in test mode, i.e. with is_training=False, the running mean/variance calculated in the training step keeps changing, which can be seen when we print out the BN variables that I get from calling bnParams.

if __name__ == "__main__":
    print("Example")

    import os
    import numpy as np
    import scipy.stats as stats
    np.set_printoptions(suppress=True,linewidth=200,precision=3)
    np.random.seed(1006)
    import pdb
    path = "batchNorm/"
    if not os.path.exists(path):
        os.mkdir(path)
    savePath = path + "bn.model"

    nFeats = 2
    X = tf.placeholder(tf.float32,[None,nFeats])
    is_training = tf.placeholder(tf.bool,name="is_training")
    Y = bn(X,is_training=is_training,name="bn")
    mvn = stats.multivariate_normal([0,100])
    bs = 4
    load = 0
    train = 1
    saver = tf.train.Saver()
    def bnCheck(batch, mu, std):
        # Manual check: normalize the batch with the stored running mean/std.
        return (batch - mu) / (std + 0.001)
    with tf.Session() as sess:
        if load == 1:
            saver.restore(sess,savePath)
        else:
            tf.global_variables_initializer().run()
        #### TRAINING #####
        if train == 1:
            for i in xrange(100):
                x = mvn.rvs(bs)
                y = Y.eval(feed_dict={X:x, is_training.name: True})

        def bnParams():
            beta, gamma, mean, var = [v.eval() for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="bn")]
            return beta, gamma, mean, var

        beta, gamma, mean, var = bnParams()
        #### TESTING #####
        for i in xrange(10):
            x = mvn.rvs(1).reshape(1,-1)
            check = bnCheck(x,mean,np.sqrt(var))
            y = Y.eval(feed_dict={X:x, is_training.name: False})
            print("x = {0}, y = {1}, check = {2}".format(x,y,check))
            beta, gamma, mean, var = bnParams()
            print("BN Params: Beta {0} Gamma {1} mean {2} var{3} \n".format(beta,gamma,mean,var))

        saver.save(sess,savePath)

The first three iterations of the test loop look as follows:

x = [[  -1.782  100.941]], y = [[-1.843  1.388]], check = [[-1.842  1.387]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.2   99.93] var[ 0.818  0.589]

x = [[  -1.245  101.126]], y = [[-1.156  1.557]], check = [[-1.155  1.557]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [  -0.304  100.05 ] var[ 0.736  0.53 ]

x = [[ -0.107  99.349]], y = [[ 0.23  -0.961]], check = [[ 0.23 -0.96]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.285  99.98 ] var[ 0.662  0.477]

I am not doing backpropagation, so beta and gamma won't change. However, my running means/variances are changing. Where am I going wrong?

It would also be good to know why the following arguments do or do not need to change between test and train:

updates_collections, reuse, trainable

Recommended Answer

Your bn function is wrong. Use this instead:

def bn(x, is_training, name):
    return batch_norm(x, decay=0.9, center=True, scale=True,
                      updates_collections=None,
                      is_training=is_training,
                      reuse=None,
                      trainable=True,
                      scope=name)

is_training is a 0-D bool tensor that signals whether to update the running mean, etc. Then, just by changing the is_training tensor, you signal whether you are in the training or the test phase.
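
As a minimal sketch of how this might be wired up, reusing the X and is_training placeholders from the question's toy run together with the bn defined above (the random batches here are purely illustrative):

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 2])
is_training = tf.placeholder(tf.bool, name="is_training")
Y = bn(X, is_training=is_training, name="bn")  # bn() as defined above

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training phase: feed True so the running mean/variance are updated.
    for _ in range(100):
        batch = np.random.randn(4, 2).astype(np.float32)
        sess.run(Y, feed_dict={X: batch, is_training: True})
    # Test phase: feed False so the stored statistics are used unchanged.
    sample = np.random.randn(1, 2).astype(np.float32)
    sess.run(Y, feed_dict={X: sample, is_training: False})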

Many operations in TensorFlow accept tensors, not just constant True/False arguments.
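
For instance, batch_norm's is_training argument can be either a fixed Python bool, baked in when the graph is built, or a 0-D bool tensor fed at run time; a rough sketch (the scope names here are arbitrary):

import tensorflow as tf
from tensorflow.contrib.layers.python.layers import batch_norm

x = tf.placeholder(tf.float32, [None, 2])

# Constant: the phase is fixed at graph-construction time.
y_fixed = batch_norm(x, is_training=True,
                     updates_collections=None, scope="bn_fixed")

# Tensor: the same graph can switch phase on every session.run call.
phase = tf.placeholder(tf.bool, name="phase")
y_switchable = batch_norm(x, is_training=phase,
                          updates_collections=None, scope="bn_switch")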
