This article describes how to define weight decay for individual layers in TensorFlow, which may be a useful reference for anyone facing the same problem.
Question
In CUDA ConvNet, we can write something like this (source) for each layer:
[conv32]
epsW=0.001
epsB=0.002
momW=0.9
momB=0.9
wc=0
where wc=0 refers to the L2 weight decay.
How can the same be achieved in TensorFlow?
Answer
You can add all the variables you want to apply weight decay to into a collection (named 'weights' below), and then compute the L2-norm weight-decay term over the whole collection.
# Choose a decay coefficient, e.g.:
WEIGHT_DECAY_FACTOR = 0.0005

# Create your variables and add them to the 'weights' collection.
# Listing GLOBAL_VARIABLES as well keeps them visible to the usual
# variable initializers and savers.
weights = tf.get_variable(
    'weights', shape=[10, 10],
    collections=['weights', tf.GraphKeys.GLOBAL_VARIABLES])

with tf.variable_scope('weights_norm'):
    # tf.pack was renamed tf.stack in TensorFlow 1.0
    weights_norm = tf.reduce_sum(
        input_tensor=WEIGHT_DECAY_FACTOR * tf.stack(
            [tf.nn.l2_loss(w) for w in tf.get_collection('weights')]
        ),
        name='weights_norm'
    )

# Add the weight-decay loss to another collection called 'losses'
tf.add_to_collection('losses', weights_norm)

# Add the other loss components to the 'losses' collection
# ...

# To calculate your total loss
total_loss = tf.add_n(tf.get_collection('losses'), name='total_loss')
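Numerically, the weight-decay term above is just wc * ||W||^2 / 2 summed over the layers you opt in, since tf.nn.l2_loss(w) computes sum(w**2) / 2. A framework-free sketch of that arithmetic, with per-layer wc values as in the CUDA ConvNet config (function and variable names here are illustrative, not part of any API):

```python
def l2_loss(weights):
    """Half the sum of squared entries, matching tf.nn.l2_loss's convention."""
    return sum(w * w for w in weights) / 2.0

def total_loss(data_loss, layers):
    """layers: list of (weights, wc) pairs; wc=0 disables decay for that layer."""
    decay = sum(wc * l2_loss(ws) for ws, wc in layers)
    return data_loss + decay

layers = [
    ([3.0, 4.0], 0.001),  # layer with wc=0.001
    ([1.0, 2.0], 0.0),    # wc=0: no weight decay, as in the config above
]
loss = total_loss(10.0, layers)
# l2_loss([3, 4]) = 12.5, so decay = 0.0125 and loss = 10.0125
```

Giving each layer its own wc is what makes this per-layer: in the TensorFlow version you would keep one collection per decay factor (or store the factor alongside each variable) instead of a single global 'weights' collection.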