Problem Description
I'm trying to wrap my head around how gradients are represented and how autograd works:
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([2]), requires_grad=True)  # leaf variable
y = x * x  # intermediate (non-leaf) variable
z = y * y  # z = y^2 = x^4
z.backward()
print(x.grad)
#Variable containing:
#32
#[torch.FloatTensor of size 1]
print(y.grad)
#None
Why does it not produce a gradient for y? If y.grad = dz/dy, then shouldn't it at least produce a variable like y.grad = 2*y?
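For concreteness: z = y*y = x^4, so at x = 2 the chain rule gives dz/dx = dz/dy * dy/dx = 2y * 2x = 32 (the value printed for x.grad), and the intermediate gradient would be dz/dy = 2y = 8.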
Recommended Answer
- soumith chintala
See: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94
Call y.retain_grad():
x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y
y.retain_grad()  # ask autograd to keep y's gradient when backward() runs
z.backward()
print(y.grad)
#Variable containing:
# 8
#[torch.FloatTensor of size 1]
Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/16
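For context: autograd only populates .grad on leaf Variables (the ones you create with requires_grad=True); gradients of intermediates such as y are computed during the backward pass and then freed to save memory, and retain_grad() asks autograd to keep them. Note that it must be called before backward(). A minimal sketch of the same fix on current PyTorch (assuming version >= 0.4, where plain tensors replace Variable):
import torch
x = torch.tensor([2.0], requires_grad=True)  # leaf tensor
y = x * x                                    # intermediate (non-leaf) tensor
z = y * y
y.retain_grad()  # must come before backward(); keeps dz/dy in y.grad
z.backward()
print(y.grad)  # tensor([8.]) -- dz/dy = 2*y = 8 at y = 4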
Register a hook, which is basically a function called when that gradient is calculated. Then you can save it, assign it, print it, whatever...
from __future__ import print_function
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y
y.register_hook(print) ## this can be anything you need it to be
z.backward()
Output:
Variable containing:
 8
[torch.FloatTensor of size 1]
Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/2
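Since the hook receives the gradient as its only argument, you can also capture it rather than print it. A minimal sketch, again on current PyTorch; the grads dict is purely illustrative, not part of any API:
import torch
grads = {}  # illustrative container for captured gradients
x = torch.tensor([2.0], requires_grad=True)
y = x * x
z = y * y
y.register_hook(lambda grad: grads.update(y=grad))  # hook receives dz/dy
z.backward()
print(grads['y'])  # tensor([8.])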
See also: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7