PyTorch Basics, Part 2

Defining a Network

Every network is defined through torch.nn.Module: the user-defined Net class inherits from torch.nn.Module and implements __init__() and forward().
Convolutional layers, fully-connected layers, and other layers with learnable parameters are defined in __init__(), while parameter-free operations such as ReLU and pooling are applied in forward(), as in the sketch below.
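
A minimal sketch of such a class; the layer sizes and the assumed 32x32 input are illustrative, not prescribed:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # learnable layers are declared in __init__()
        self.conv1 = nn.Conv2d(1, 6, 5)             # 1 input channel, 6 output, 5x5 kernel
        self.fc1 = nn.Linear(6 * 14 * 14, 10)       # assumes 32x32 input images

    def forward(self, x):
        # parameter-free ops (relu, pooling) are applied in forward()
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32x32 -> 28x28 -> 14x14
        x = x.view(x.size(0), -1)                   # flatten all but the batch dim
        return self.fc1(x)

net = Net()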

torch.nn only supports mini-batches, so inputs are expected to have dimensions nSamples x nChannels x Height x Width.
If you have only a single sample, input.unsqueeze(x) inserts a fake batch dimension at the specified position x (typically 0), as the sketch below shows.
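
For example, for a single 1x32x32 image (reusing the hypothetical net defined above):

import torch

single = torch.randn(1, 32, 32)   # nChannels x Height x Width, no batch dim
batched = single.unsqueeze(0)     # -> 1 x 1 x 32 x 32, fake batch dim at position 0
output = net(batched)             # now acceptable to the torch.nn layers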

Recap

torch.Tensor - A multi-dimensional array.
autograd.Variable - Wraps a Tensor and records the history of operations applied to it. Has the same API as a Tensor, with some additions like backward(). Also holds the gradient w.r.t. the tensor.
nn.Module - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
nn.Parameter - A kind of Variable, that is automatically registered as a parameter when assigned as an attribute to a Module.
autograd.Function - Implements forward and backward definitions of an autograd operation. Every Variable operation creates at least one Function node, which connects to the functions that created the Variable and encodes its history.
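
A tiny sketch tying these together (the values are arbitrary; in later PyTorch releases Variable was merged into Tensor, but the code below matches the API described above):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)  # wraps a Tensor, tracks history
y = (x * 3).sum()                                    # each op creates a Function node
y.backward()                                         # same API as Tensor, plus backward()
print(x.grad)                                        # holds d(y)/d(x) = 3 everywhere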

Defining a Network Quickly

This is done with torch.nn.Sequential(). Note that every building block must then be a module class from torch.nn (e.g. nn.ReLU), not a function from nn.functional (e.g. F.relu); a sketch follows.
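
A minimal sketch, with illustrative layer sizes:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),          # the nn.ReLU module, not the F.relu function
    nn.Linear(64, 10),
)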

Loss Function and Backpropagation

To obtain a loss, first instantiate the loss function, criterion = nn.MSELoss(), then evaluate it to get the concrete loss, loss = criterion(output, target).
For backpropagation, first clear the gradient buffers of the network's parameters with net.zero_grad(), then call loss.backward() to backpropagate.
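
Putting both steps together; net is the hypothetical network from above, and input and target are made-up placeholders:

import torch
import torch.nn as nn

criterion = nn.MSELoss()            # the loss function
input = torch.randn(1, 1, 32, 32)   # hypothetical batch of one 32x32 image
target = torch.randn(1, 10)         # hypothetical target matching the output shape
output = net(input)                 # forward pass
loss = criterion(output, target)    # scalar loss value

net.zero_grad()                     # clear the gradient buffers of all parameters
loss.backward()                     # backpropagate; populates .grad of each parameter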

Updating the Weights

Direct update

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)  # in-place SGD step: param -= lr * grad

Updating with torch.optim

import torch.optim as optim

optimizer = optim.SGD(net.parameters(), lr=0.01)  # hand the parameters to the optimizer
optimizer.zero_grad()              # zero the gradient buffers
output = net(input)                # forward pass
loss = criterion(output, target)   # compute the loss
loss.backward()                    # backpropagate
optimizer.step()                   # apply the SGD update

Note that zero_grad() is now called on the optimizer, which zeroes the gradients of exactly the parameters that were handed to it.


Feb 23rd, 2018
