URL: https://link.zhihu.com/?target=http%3A//pytorch.org/tutorials/
(一) What is PyTorch?
1.Tensors are similar to NumPy’s ndarrays, with the addition being that
Tensors can also be used on a GPU to accelerate computing.
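A minimal sketch of this point; the GPU branch assumes a CUDA-capable machine:

import torch

# Create a tensor, much like a NumPy ndarray.
x = torch.ones(2, 3)

# Move it to the GPU to accelerate computation, if one is available.
if torch.cuda.is_available():
    device = torch.device("cuda")
    x = x.to(device)                        # x now lives in GPU memory
    y = torch.ones_like(x, device=device)   # create y directly on the GPU
    print(x + y)                            # this addition runs on the GPU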
2.Any operation that mutates a tensor in-place is post-fixed with an _.
For example: x.copy_(y), x.t_(), will change x.
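A small sketch using standard in-place ops:

import torch

x = torch.zeros(2, 2)
y = torch.ones(2, 2)

x.copy_(y)   # copies y into x in place; x is now all ones
x.t_()       # transposes x in place
x.add_(1)    # adds 1 to every element of x in place
print(x)     # tensor([[2., 2.], [2., 2.]])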
3.The Torch Tensor and NumPy array will share their underlying memory
locations (if the Torch Tensor is on the CPU), so changing one will change
the other, in either direction (demonstrated in the sketch after point 4).
4.All the Tensors on the CPU except a CharTensor support converting to
NumPy and back.
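A short sketch of points 3 and 4 together, converting in both directions and showing the shared memory:

import torch
import numpy as np

a = torch.ones(3)
b = a.numpy()            # b shares memory with a (a is a CPU tensor)
a.add_(1)                # in-place change to the tensor...
print(b)                 # ...is visible in the array: [2. 2. 2.]

c = np.ones(3)
d = torch.from_numpy(c)  # sharing works in the other direction too
np.add(c, 1, out=c)      # in-place change to the array...
print(d)                 # ...shows up in the tensor: tensor([2., 2., 2.], dtype=torch.float64)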
(二) Autograd: Automatic Differentiation
1.Two ways to exclude a tensor from gradient computation:
(1) call .detach() on it
(2) wrap the code in a with torch.no_grad(): block
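A minimal sketch of both ways:

import torch

x = torch.ones(2, 2, requires_grad=True)

# (1) .detach() returns a new tensor that shares data with x
#     but is cut off from the computation graph.
y = x.detach()
print(y.requires_grad)   # False

# (2) operations inside torch.no_grad() are not tracked at all.
with torch.no_grad():
    z = x * 2
print(z.requires_grad)   # False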
2.If you want to compute the derivatives, you can call .backward() on
a Tensor. If the Tensor is a scalar (i.e. it holds a single element),
you don’t need to specify any arguments to backward();
however, if it has more elements, you need to specify a gradient argument that is a tensor of matching shape.
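A sketch of both cases; the gradient vector passed in the non-scalar case is an arbitrary example:

import torch

x = torch.ones(2, requires_grad=True)

# Scalar output: backward() needs no arguments.
out = (x * x).sum()
out.backward()
print(x.grad)            # tensor([2., 2.])

# Non-scalar output: pass a gradient tensor of matching shape
# (backward() then computes a vector-Jacobian product).
x.grad.zero_()
y = x * 3
y.backward(torch.tensor([1.0, 0.1]))
print(x.grad)            # tensor([3.0000, 0.3000])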
3..requires_grad_(...) changes an existing Tensor’s requires_grad flag in-place. If not set explicitly, requires_grad defaults to False.
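For example:

import torch

a = torch.randn(2, 2)    # requires_grad defaults to False
print(a.requires_grad)   # False

a.requires_grad_(True)   # flip the flag in place
print(a.requires_grad)   # True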
4. Each tensor has a .grad_fn attribute that references the Function that
created it (except for Tensors created directly by the user, whose grad_fn
is None). In other words, every tensor records the operation that produced
it: a tensor created by addition, for example, will have
grad_fn=<AddBackward0>, while a user-created tensor has a grad_fn of None.
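A quick illustration:

import torch

x = torch.ones(2, 2, requires_grad=True)
print(x.grad_fn)         # None -- x was created directly by the user

y = x + 2
print(y.grad_fn)         # <AddBackward0 object at ...> -- created by the add op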