1. How backpropagation works in PyTorch
# Zero the gradients before running the backward pass.
model.zero_grad()

# Backward pass: compute the gradient of the loss with respect to all
# learnable parameters of the model. Internally, the parameters of each
# Module are stored in Tensors with requires_grad=True, so this call
# computes gradients for all learnable parameters in the model.
loss.backward()

# Update the weights using gradient descent. Each parameter is a Tensor,
# so we can access its gradient like we did before.
with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad
The code above is equivalent to the following:
optimizer.zero_grad()
loss.backward()
optimizer.step()
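For context, here is a minimal end-to-end sketch; the model, data, loss function, and learning_rate are illustrative placeholders, not part of the original snippets. It shows the manual update and the optimizer-based update side by side (in a real training loop you would use only one of the two).

import torch

# Illustrative setup (assumed, not from the original snippets).
model = torch.nn.Linear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-2

# Variant A: manual gradient descent.
for _ in range(5):
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# Variant B: the same update expressed with an optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for _ in range(5):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()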
2. The difference between detach and clone
detach: shares memory with the original tensor, but is cut off from gradient computation, i.e. the returned tensor has requires_grad=False.
clone: does not share memory with the original tensor, but stays in the computation graph, so gradients still flow back through the clone to the source tensor.
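A small sketch illustrating the difference (the tensor names here are arbitrary examples):

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# detach: shares storage with x, but requires_grad is False
# and it is removed from the computation graph.
d = x.detach()
print(d.requires_grad)               # False
print(d.data_ptr() == x.data_ptr())  # True  -> same underlying memory

# clone: separate storage, but still part of the computation graph,
# so gradients flow back to x through the clone.
c = x.clone()
print(c.requires_grad)               # True
print(c.data_ptr() == x.data_ptr())  # False -> different memory
c.sum().backward()
print(x.grad)                        # tensor([1., 1., 1.])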