I feed data to the graph with the input-pipeline approach and use tf.train.shuffle_batch to generate batches. However, as training proceeds, TensorFlow gets slower and slower in later iterations. I am confused about what is actually causing this. Thanks a lot! My code snippet is:

import tensorflow as tf

def main(argv=None):

    # define network parameters
    # weights
    # bias

    # define graph
    # graph network

    # define loss and optimization method
    # data = inputpipeline('*')
    # loss
    # optimizer

    # initializing the variables
    init = tf.initialize_all_variables()

    # 'Saver' op to save and restore all the variables
    saver = tf.train.Saver()

    # running session
    print("Starting session... ")
    with tf.Session() as sess:

        # initialize the variables
        sess.run(init)

        # start the queue threads that feed the input pipeline
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)

        print("from the train set:")
        for i in range(train_set_size * epoch):
            _, d, pre = sess.run([optimizer, depth_loss, prediction])

        print("Training Finished!")

        # save the variables to disk
        save_path = saver.save(sess, model_path)
        print("Model saved in file: %s" % save_path)

        # stop the queue threads and properly close the session
        coord.request_stop()
        coord.join(threads)
        sess.close()
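
For reference, here is a minimal sketch of what a queue-based inputpipeline built on tf.train.shuffle_batch could look like with the TF 1.x API. The original inputpipeline('*') is not shown in the question, so the file format, image size, and function signature below are purely illustrative assumptions:

import tensorflow as tf

def inputpipeline(filenames, batch_size=32):
    # filenames: a Python list of image paths (illustrative assumption)
    filename_queue = tf.train.string_input_producer(filenames, shuffle=True)

    # read and decode one image per dequeue
    reader = tf.WholeFileReader()
    _, contents = reader.read(filename_queue)
    image = tf.image.decode_png(contents, channels=3)

    # shuffle_batch needs a static shape, so resize to a fixed size (224x224 assumed)
    image = tf.image.resize_images(image, [224, 224])

    # assemble shuffled mini-batches; capacity and min_after_dequeue bound the queue memory
    batch = tf.train.shuffle_batch([image],
                                   batch_size=batch_size,
                                   capacity=1000 + 3 * batch_size,
                                   min_after_dequeue=1000,
                                   num_threads=4)
    return batch

The returned batch tensor would then be wired into the loss and optimizer before the session starts.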

Best Answer

During training, you should only call sess.run once per iteration, fetching everything you need in that single call.
Try something like the following; hopefully it helps:

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for i in range(train_set_size * epoch):
    sess.run([optimizer, depth_loss, prediction])
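
Spelled out with the queue setup from the question (optimizer, depth_loss, prediction, train_set_size and epoch are assumed to be defined as in the original graph), that suggestion would read roughly:

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())

  # start the input-pipeline threads
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)

  for i in range(train_set_size * epoch):
    # a single sess.run per step, fetching every tensor needed in this iteration
    _, d, pre = sess.run([optimizer, depth_loss, prediction])

  coord.request_stop()
  coord.join(threads)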

Regarding "python - Why does Tensorflow training become slower and slower when the iteration count exceeds 10,000?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41354261/
