Problem description
The feature I'm after is to be able to tell what the gradient of a given variable is with respect to my error function, given some data.
One way to do this would be to see how much the variable changes after a call to train, but obviously that varies massively with the learning algorithm (for example, it would be almost impossible to tell with something like RProp), and it just isn't very clean.
Thanks in advance.
Recommended answer
The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors, including variables. Consider the following simple example:
data = tf.placeholder(tf.float32)
var = tf.Variable(...) # Must be a tf.float32 or tf.float64 variable.
loss = some_function_of(var, data) # some_function_of() returns a `Tensor`.
var_grad = tf.gradients(loss, [var])[0]  # tf.gradients() returns a list, one gradient per variable.
You can then use this symbolic gradient to evaluate the gradient at a specific point (i.e. for specific data):
sess = tf.Session()
var_grad_val = sess.run(var_grad, feed_dict={data: ...})
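To build intuition for what `var_grad_val` contains, here is a minimal numerical sketch of the same idea using only NumPy, with a hypothetical concrete loss (loss = mean((var * data)^2), not from the original post): the analytic gradient with respect to `var` should agree with a finite-difference estimate, which is also a handy sanity check for any gradient you get back from `tf.gradients()`.

```python
import numpy as np

# Hypothetical concrete loss (an assumption for illustration):
# loss(var) = mean((var * data) ** 2), a scalar function of the scalar `var`.
def loss(var, data):
    return np.mean((var * data) ** 2)

data = np.array([1.0, 2.0, 3.0])
var = 0.5

# Analytic gradient: d/dvar mean((var * data)^2) = 2 * var * mean(data^2)
analytic_grad = 2.0 * var * np.mean(data ** 2)

# Central finite-difference estimate of the same gradient.
eps = 1e-6
numeric_grad = (loss(var + eps, data) - loss(var - eps, data)) / (2 * eps)

print(analytic_grad)  # 2 * 0.5 * (14/3) ≈ 4.6667
print(abs(analytic_grad - numeric_grad) < 1e-4)  # True
```

The same finite-difference trick can be used to check a TensorFlow gradient: evaluate the loss at `var + eps` and `var - eps` with the same feed data and compare against `var_grad_val`.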