autograd
- requires_grad attribute: when set to True, operations on the tensor are tracked
- requires_grad_() function: sets the requires_grad attribute in place
- grad attribute: holds the computed gradient
- backward() function: computes gradients
- with torch.no_grad(): context manager: tensors created inside the block are not tracked
- detach() function: returns a tensor detached from the computation graph, so it is no longer tracked (see the sketch in Example 6 below)
- grad_fn attribute: records the function that created the tensor
Example 1
import torch

x=torch.ones(2,2)            # requires_grad defaults to False
y=x+2
print(x)
print(y)                     # x is not tracked, so y has no grad_fn
x=torch.ones(2,2,requires_grad=True)
y=x+2
print(x)
print(y)                     # y was produced by an op, so it carries a grad_fn
print(y.grad_fn)
output:
tensor([[1., 1.],
        [1., 1.]])
tensor([[3., 3.],
        [3., 3.]])
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
tensor([[3., 3.],
        [3., 3.]], grad_fn=<AddBackward0>)
<AddBackward0 object at 0x0000027286D72B38>
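Note that in the first half y.grad_fn would print None, since x is not tracked; once requires_grad=True is set, every operation records its backward function (AddBackward0 here, for the addition).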
Example 2
a=torch.randn(2,2)           # created directly, so not tracked
a=((a*3)/(a-1))
print(a.requires_grad)
a.requires_grad_(True)       # flip the flag in place
print(a.requires_grad)
b=(a*a).sum()
print(b.grad_fn)
output:
False
True
<SumBackward0 object at 0x0000023D90762B00>
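requires_grad_() only affects what happens afterwards: the arithmetic on a before the call was not recorded, but everything after it is, which is why b carries a SumBackward0 grad_fn.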
Example 3
x=torch.ones(2,2,requires_grad=True)
y=x+2
z=y*y*3
out=z.mean()                 # out is a scalar, so backward() needs no argument
print(out)
out.backward()               # backpropagate: fills x.grad
print(x.grad)
output:
tensor(27., grad_fn=<MeanBackward0>)
tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])
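A quick check of the 4.5: out = (1/4)·Σ z_i with z_i = 3(x_i+2)², so ∂out/∂x_i = (3/2)(x_i+2), which is 4.5 at x_i = 1.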
Example 4
x=torch.randn(3,requires_grad=True)
y=x*2
v=torch.tensor([0.1,1.0,0.0001],dtype=torch.float) # same shape as y
y.backward(v)                # y is not a scalar, so backward needs a vector argument
print(x.grad)
output:
tensor([2.0000e-01, 2.0000e+00, 2.0000e-04])
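For a non-scalar output, y.backward(v) computes the vector-Jacobian product J^T·v. Here y_i = 2·x_i, so the Jacobian is 2·I and x.grad = 2·v = [0.2, 2.0, 0.0002].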
Example 5
x=torch.randn(2,2,requires_grad=True)
print(x.requires_grad)
print((x**2).requires_grad)
with torch.no_grad():        # ops inside this block are not tracked
    print((x**2).requires_grad)
output:
True
True
False
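detach() from the list at the top is not exercised by any of the examples above; a minimal sketch, added here as Example 6 (output assumes a recent PyTorch version):
Example 6
x=torch.randn(2,2,requires_grad=True)
y=x.detach()                 # same data as x, but cut off from the graph
print(x.requires_grad)
print(y.requires_grad)
print(x.eq(y).all())         # the values are unchanged
output:
True
False
tensor(True)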