I am trying to retrain the last layer of inception-resnet-v2. Here is what I came up with:

1. get the names of the variables in the final layer,
2. create a train_op that minimizes the loss only with respect to those variables, and
3. restore the whole graph from the pretrained checkpoint except the final layer, which is initialized randomly.

I implemented this as follows:
    with slim.arg_scope(arg_scope):
        logits = model(images_ph, is_training=True, reuse=None)

    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=labels_ph))
    accuracy = tf.contrib.metrics.accuracy(tf.argmax(logits, 1), labels_ph)

    # Train only the final-layer variables.
    train_list = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                   'InceptionResnetV2/Logits')
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(loss, var_list=train_list)

    # Restore all trainable variables whose names don't contain 'Logits'
    # (the scope argument is interpreted as a regular expression).
    restore_list = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                     scope='^((?!Logits).)*$')
    saver = tf.train.Saver(restore_list, write_version=tf.train.SaverDef.V2)

    with tf.Session() as session:
        init_op = tf.group(tf.local_variables_initializer(),
                           tf.global_variables_initializer())
        session.run(init_op)
        saver.restore(session, '../models/inception_resnet_v2_2016_08_30.ckpt')
        # followed by code for running train_op
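The elided training loop is just repeated session.run calls on train_op inside the session block above; a minimal sketch, assuming a hypothetical next_batch() helper (not in the original post) that returns numpy arrays of images and labels:

    # Hypothetical training loop: run the optimizer step and monitor the loss.
    for step in range(FLAGS.max_steps):
        images, labels = next_batch(FLAGS.batch_size)
        _, loss_value = session.run([train_op, loss],
                                    feed_dict={images_ph: images,
                                               labels_ph: labels})
        if step % 100 == 0:
            print('step %d: loss = %.3f' % (step, loss_value))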
This doesn't seem to work (the training loss and error barely improve from their initial values). Is there a better or more idiomatic way to do this? It would also be good learning for me if you could tell me what is going wrong here.
Best answer
A few things. First, build the model through the published slim helper and its arg scope, so the layers (including their regularizers and batch-norm settings) are constructed exactly as in the pretrained network:
    import tensorflow.contrib.slim as slim
    from nets import inception_resnet_v2 as net

    # inception_resnet_v2_arg_scope() returns an arg-scope dict,
    # so it has to be entered through slim.arg_scope.
    with slim.arg_scope(net.inception_resnet_v2_arg_scope()):
        logits, end_points = net.inception_resnet_v2(images_ph,
                                                     num_classes=num_classes,
                                                     is_training=True)
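Note also that the question restores only tf.GraphKeys.TRAINABLE_VARIABLES, which leaves the batch-norm moving means and variances at their random initial values; that alone can keep the loss from improving. A common fix (a sketch, not part of the original answer) is to restore everything except the freshly initialized classifier scopes:

    # Restore every variable except the new classifier layers. Unlike the
    # TRAINABLE_VARIABLES collection, this also picks up the batch-norm
    # moving means/variances, which must come from the checkpoint.
    variables_to_restore = slim.get_variables_to_restore(
        exclude=['InceptionResnetV2/Logits', 'InceptionResnetV2/AuxLogits'])
    saver = tf.train.Saver(variables_to_restore)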
Second, the raw cross-entropy is not the whole objective: the arg scope attaches weight-decay regularizers to every layer, and their losses should be added in:

    # Weight-decay terms created by the arg scope live in this collection;
    # add them to the data loss to get the objective actually minimized.
    regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    all_losses = [loss] + regularization_losses
    total_loss = tf.add_n(all_losses, name='total_loss')
Finally, tie the training op to total_loss, so that evaluating train_op runs the optimizer step and returns the current loss (this mirrors what slim's own training utilities do internally):

    from tensorflow.python.framework import ops
    from tensorflow.python.ops import control_flow_ops

    with ops.name_scope('train_op'):
        # Running train_op now executes the minimize step and yields total_loss.
        train_op = control_flow_ops.with_dependencies([train_op], total_loss)
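Alternatively, slim bundles these last steps into a single helper; a sketch of the equivalent call, reusing the optimizer and train_list from the question:

    # create_train_op builds the minimize op restricted to variables_to_train
    # and returns an op that evaluates to total_loss. By default it also wires
    # in the batch-norm update ops from tf.GraphKeys.UPDATE_OPS, which the
    # hand-rolled version above skips.
    train_op = slim.learning.create_train_op(total_loss, optimizer,
                                             variables_to_train=train_list)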
Source: python - Training the last layer of Inception-ResNet-v2, a similar question on Stack Overflow: https://stackoverflow.com/questions/41407124/