TensorFlow creates new variables despite reuse set to true

This article explains how to deal with TensorFlow creating new variables even though reuse is set to true.

Problem description

I am trying to build a basic RNN, but I get errors when trying to use the network after training. I keep the network architecture in a function inference:

def inference(inp):
    with tf.name_scope("inference"):
        layer = SimpleRNN(1, activation='sigmoid', return_sequences=False)(inp)
        layer = Dense(1)(layer)

    return layer

But every time I call it, another set of variables gets created, despite using the same scope as in training:

def train(sess, seq_len=2, epochs=100):
    x_input, y_input = generate_data(seq_len)

    with tf.name_scope('train_input'):
        x = tf.placeholder(tf.float32, (None, seq_len, 1))
        y = tf.placeholder(tf.float32, (None, 1))

    with tf.variable_scope('RNN'):
        output = inference(x)

    with tf.name_scope('training'):
        loss = tf.losses.mean_squared_error(labels=y, predictions=output)
        train_op = tf.train.RMSPropOptimizer(learning_rate=0.1).minimize(loss=loss, global_step=tf.train.get_global_step())

    with sess.as_default():
        sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

        for i in tqdm.trange(epochs):
            ls, res, _ = sess.run([loss, output, train_op], feed_dict={x:x_input, y:y_input})
            if i%100==0:
                print(f'{ls}: {res[10]} - {y_input[10]}')
            x_input, y_input = generate_data(seq_len)

And the prediction:

def predict_signal(sess, x, seq_len):
    # Preparing signal (omitted)
    # Predict
    inp = tf.convert_to_tensor(prepared_signal, tf.float32)
    with sess.as_default():
        with tf.variable_scope('RNN', reuse=True) as scope:
            output = inference(inp)
            result = output.eval()

    return result

I have spent a couple of hours reading about variable scopes by now, but when running the prediction I still get the error Attempting to use uninitialized value RNN_1/inference/simple_rnn_2/kernel, with the number after RNN_1 increasing with each call.
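The growing suffix is TensorFlow's name uniquification at work: when a requested scope or variable name is already taken, TensorFlow appends _1, _2, and so on rather than reusing the existing object. A minimal pure-Python sketch of that mechanism (illustrative only, not TensorFlow internals; the helper name is made up):

```python
# Illustrative sketch of TensorFlow-style name uniquification.
# Each request for an already-used name gets a numeric suffix,
# which is why repeated calls produce RNN, RNN_1, RNN_2, ...

def make_uniquifier():
    taken = {}  # base name -> number of extra copies handed out

    def unique_name(name):
        if name not in taken:
            taken[name] = 0
            return name
        taken[name] += 1
        return f"{name}_{taken[name]}"

    return unique_name

unique_name = make_uniquifier()
print(unique_name("RNN"))  # RNN    (first call, e.g. during training)
print(unique_name("RNN"))  # RNN_1  (second call, e.g. first prediction)
print(unique_name("RNN"))  # RNN_2  (third call)
```

So the error message itself shows that a brand-new variable set is being created on every call, rather than the trained one being reused.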

Recommended answer

This is just speculation until you show us the SimpleRNN implementation. However, I suspect that SimpleRNN is implemented rather poorly. There is a difference between tf.get_variable and tf.Variable, and I expect your SimpleRNN uses tf.Variable.

To reproduce this behavior:

import tensorflow as tf


def inference(x):
    w = tf.Variable(1., name='w')
    layer = x + w
    return layer


x = tf.placeholder(tf.float32)

with tf.variable_scope('RNN'):
    output = inference(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(output, {x: 10}))

    with sess.as_default():
        with tf.variable_scope('RNN', reuse=True):
            output2 = inference(x)

    print(sess.run(output2, {x: 10}))

This gives exactly the same error:

However, the version with w = tf.get_variable('w', initializer=1.) instead of w = tf.Variable(1., name='w') works.
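The underlying reason: tf.get_variable looks variables up by name in the current variable scope, so with reuse=True it returns the already-existing variable, while tf.Variable unconditionally creates a new one under a uniquified name. A minimal pure-Python sketch of that difference (illustrative only; the Scope class and its methods are made up, not TensorFlow's internals):

```python
# Illustrative model of tf.get_variable (name-based lookup, honors reuse)
# versus tf.Variable (always creates a fresh variable).

class Scope:
    def __init__(self, reuse=False):
        self.registry = {}  # name -> variable value
        self.reuse = reuse

    def get_variable(self, name, initializer):
        # tf.get_variable-style: look the name up, honoring the reuse flag.
        if name in self.registry:
            if not self.reuse:
                raise ValueError(f"Variable {name} already exists")
            return self.registry[name]
        if self.reuse:
            raise ValueError(f"Variable {name} does not exist")
        self.registry[name] = initializer
        return self.registry[name]

    def variable(self, name, value):
        # tf.Variable-style: always create, uniquifying the name if taken.
        candidate, i = name, 0
        while candidate in self.registry:
            i += 1
            candidate = f"{name}_{i}"
        self.registry[candidate] = value
        return candidate

scope = Scope()
scope.get_variable("w", 1.0)          # creates "w"
scope.reuse = True
print(scope.get_variable("w", 2.0))   # 1.0 -- the existing variable is returned

print(scope.variable("kernel", 0.0))  # kernel
print(scope.variable("kernel", 0.0))  # kernel_1 -- a second copy, never reused
```

A layer built on the name-based lookup participates in reuse=True for free; a layer built on unconditional creation silently duplicates its weights, which is exactly the symptom in the question.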

Why? See the documentation:

tf.get_variable:

edit: Thank you for the question (I added the keras tag to your question). This is now becoming my favorite example for showing people why using Keras is the worst decision they ever made.

SimpleRNN creates its variables here:

self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      name='kernel',...)

which executes the line

weight = K.variable(initializer(shape),
                    dtype=dtype,
                    name=name,
                    constraint=constraint)

which ends up here:

v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)

And this is an obvious flaw in the implementation. Until Keras uses TensorFlow in the correct way (respecting at least scopes and variable collections), you should look for alternatives. The best advice anyone can give you is to switch to something better, like the official tf.layers.

