In TensorFlow, L2 (Tikhonov) regularization with regularization parameter `lambda_` could be written like this:

```python
import tensorflow as tf

# Assuming you have already defined a graph with placeholders (y) and a logits layer.
lambda_ = 0.1  # regularization strength

# Cross-entropy loss, averaged over the batch
xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
base_loss = tf.reduce_mean(xentropy)

# L2 penalty: tf.nn.l2_loss(v) computes sum(v**2) / 2 for each trainable variable
l2_norms = [tf.nn.l2_loss(v) for v in tf.trainable_variables()]
l2_norm = tf.reduce_sum(l2_norms)

# Regularized cost
cost = base_loss + lambda_ * l2_norm

# From here, define the optimizer and the train operation, and train :-)
```
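As a sanity check, the same regularized cost can be computed by hand in plain NumPy. This is only a sketch: the label, logit, and weight values below are made up for illustration, and `softmax_xentropy` is a hypothetical helper that mirrors what `tf.nn.softmax_cross_entropy_with_logits_v2` computes per example.

```python
import numpy as np

def softmax_xentropy(labels, logits):
    # Numerically stable per-example softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1)

lambda_ = 0.1
labels = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot targets (made up)
logits = np.array([[2.0, 0.5], [0.2, 1.5]])   # raw network outputs (made up)
weights = [np.array([[0.5, -0.5]]), np.array([1.0, 2.0])]  # stand-in trainable variables

base = softmax_xentropy(labels, logits).mean()
# tf.nn.l2_loss(v) is sum(v**2) / 2, so sum those per-variable terms
l2 = sum((w ** 2).sum() / 2.0 for w in weights)
cost = base + lambda_ * l2
```

Note that `tf.nn.l2_loss` includes the factor of 1/2, which is conventional because it cancels when the penalty is differentiated.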