Good afternoon.
I'm new to TensorFlow and am currently trying to solve the following problem:
1) Build a simple neural network, train it, and print its accuracy (done)
2) Save it (done)
3) Restore it (done)
4) Randomly set the restored weights to zero (stuck here)
I have read the thread Dynamically changing weights in TensorFlow and tried several things from it, but to no avail.
Here is my code:
from __future__ import print_function
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
training_epochs = 20
batch_size = 100
display_step = 1
# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image 28*28
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:
    sess.run(init_op)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
                                                          y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # Save the variables to disk.
    save_path = saver.save(sess, "/Users/mac/PycharmProjects/untitled1/MyModel",
                           write_meta_graph=True)
    print("Model saved in file: %s" % save_path)
    print("Accuracy_old:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
    new_saver = tf.train.import_meta_graph('MyModel.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
    all_vars = tf.get_collection('vars')
    for v in all_vars:
        v_ = sess.run(v)
        print(v_)
    # Rand = tf.Variable(tf.random_normal([784, 10]))
    # Zeroes = tf.mul(tf.zeros([784, 10]), Rand)
    # W = tf.mul(Zeroes, Rand)
    W = tf.mul(W, 0)
    print("Accuracy_new:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
I tried multiplying by a random distribution instead of simply using 0, and nothing changed; even when I tried W = 0, the accuracy stayed the same.
Any advice would be greatly appreciated.
Best answer
The line

W = tf.mul(W, 0)

creates a new node in the graph that nothing uses; accuracy still reads the old W, which is why you see no change. The way to change W is to use TensorFlow's assign and run it (see How to assign value to a tensorflow variable?), something like:

assign_op = W.assign(tf.mul(W, 0))
sess.run(assign_op)
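Following the same assign pattern, here is a minimal sketch of the original goal: zeroing only a random subset of the restored weights rather than all of them. It assumes the question's sess, W, accuracy, x, y and mnist are still in scope; keep_prob and mask are illustrative names chosen here, and tf.random_uniform / tf.multiply are the TF 1.x spellings (pre-1.0 releases used tf.mul):

# Keep each weight with probability keep_prob, zero it otherwise.
keep_prob = 0.5  # illustrative value, not from the original post
mask = tf.cast(tf.random_uniform(tf.shape(W)) < keep_prob, tf.float32)
zero_op = W.assign(tf.multiply(W, mask))  # in-graph update, unlike rebinding the Python name W
sess.run(zero_op)  # roughly half of W's entries are now zero
print("Accuracy_new:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

Because assign mutates the existing variable, the accuracy node now sees the masked weights, which is exactly what the W = tf.mul(W, 0) rebinding in the question failed to achieve.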
For "python - How to randomly set restored weights to zero in tensorflow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41767003/