ShuffleNet v1 notes

Group Convolution

In an ordinary convolution, every filter operates on all of the input channels and the results are summed over channels.
In a group convolution, the channels are simply split into N equal parts (N groups); each group is convolved independently with its own M/N share of the previous layer's M output channels, and the per-group outputs are concatenated to form this layer's output channels.

Group convolution first appeared in AlexNet [1]: GPU memory was too small, so the network had to be split across two cards.

The depthwise convolution in MobileNet v1 is in fact the special case in which every channel forms its own group.
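
In PyTorch this is exposed directly through the groups argument of nn.Conv2d. A minimal sketch (the channel counts are chosen arbitrarily for illustration):

import torch.nn as nn

# ordinary convolution: each of the 16 filters sees all 8 input channels
conv = nn.Conv2d(8, 16, kernel_size=3, padding=1)

# group convolution with 4 groups: each output channel sees only the
# 2 input channels of its own group (8 / 4 = 2)
gconv = nn.Conv2d(8, 16, kernel_size=3, padding=1, groups=4)

# depthwise convolution (MobileNet v1): one group per channel
dwconv = nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=8)

print(conv.weight.shape)    # torch.Size([16, 8, 3, 3])
print(gconv.weight.shape)   # torch.Size([16, 2, 3, 3]), 4x fewer weights
print(dwconv.weight.shape)  # torch.Size([8, 1, 3, 3])

Grouping shrinks the weight tensor's input dimension from 8 to 8/groups, which is exactly where the parameter and FLOP savings come from.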

ShuffleNet

Channel Shuffle

Group convolution sits between the extreme of giving every channel its own filter (depthwise convolution) and the extreme of letting every filter see all channels (ordinary convolution).

ShuffleNet splits the channels into groups and applies its pointwise (1×1) convolutions only within each group.

However, stacking several group convolutions has a side effect: each output channel is derived from only a small fraction of the input channels, as shown in figure (a). A group's outputs depend only on the inputs inside that group, which blocks information flow between the channel groups.

If each group convolution is instead allowed to take its inputs from different groups (figure (b)), the input and output channels become fully related.

ShuffleNet achieves exactly this effect with a channel shuffle operation (figure (c)).


[Figure: two stacked group convolutions. (a) each output sees only its own group; (b) inputs drawn from all groups; (c) the same effect implemented with channel shuffle]

Concretely, suppose a convolutional layer has g × n output channels (g groups):

  • (1) reshape the channel dimension to (g, n)
  • (2) transpose it to (n, g)
  • (3) reshape back to the original channel layout

The transpose in step (2) is what mixes the channels across groups; note that it is a fixed, deterministic permutation, not a random one.

The PyTorch code:

def shuffle_channels(x, groups):
    """shuffle channels of a 4-D Tensor"""
    batch_size, channels, height, width = x.size()
    assert channels % groups == 0
    channels_per_group = channels // groups
    # split the channel dimension into (groups, channels_per_group)
    x = x.view(batch_size, groups, channels_per_group,
               height, width)
    # transpose 1, 2 axis
    x = x.transpose(1, 2).contiguous()
    # reshape back into the original layout
    x = x.view(batch_size, channels, height, width)
    return x
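
As a quick sanity check (my own toy example, not from the original post): with 6 channels and groups=2, the shuffle turns channel order [0, 1, 2, 3, 4, 5] into [0, 3, 1, 4, 2, 5], interleaving the two groups:

x = torch.arange(6, dtype=torch.float32).view(1, 6, 1, 1)
y = shuffle_channels(x, groups=2)
print(y.view(-1))  # tensor([0., 3., 1., 4., 2., 5.])

(This assumes torch has been imported; shuffle_channels itself only calls Tensor methods.)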

ShuffleNet v1 bottleneck

The building blocks of ShuffleNet are shown below:

[Figure: ShuffleNet units. (a) a bottleneck unit with a 3×3 depthwise convolution; (b) the ShuffleNet unit; (c) the ShuffleNet unit with stride 2]

(b) is the unit used when stride = 1 (the shortcut is added to the output);
(c) is the unit used when stride = 2 (the shortcut is average-pooled and concatenated with the output).

The full implementation in PyTorch:

import torch
import torch.nn as nn
import torch.nn.functional as F
def shuffle_channels(x, groups):
    """shuffle channels of a 4-D Tensor"""
    batch_size, channels, height, width = x.size()
    assert channels % groups == 0
    channels_per_group = channels // groups
    # split the channel dimension into (groups, channels_per_group)
    x = x.view(batch_size, groups, channels_per_group,
               height, width)
    # transpose 1, 2 axis
    x = x.transpose(1, 2).contiguous()
    # reshape back into the original layout
    x = x.view(batch_size, channels, height, width)
    return x

class ShuffleBottleNeck(nn.Module):
    def __init__(self, in_channels, out_channels, stride, groups):
        super(ShuffleBottleNeck, self).__init__()

        self.stride = stride
        self.groups = groups
        # the bottleneck's middle layers use 1/4 of the output channels:
        # "we set the number of bottleneck channels to 1/4 of the output
        # channels for each ShuffleNet unit" (from the paper)
        mid_channels = out_channels // 4

        # the paper applies no group convolution to the first pointwise layer
        # of stage 2, because its input has only 24 channels
        set_groups = groups if in_channels != 24 else 1
        # first grouped 1x1 (pointwise) convolution
        self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1,
                               groups=set_groups, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_channels)

        # 3x3 depthwise convolution (one group per channel)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3,
                               groups=mid_channels, padding=1, stride=stride,
                               bias=False)
        self.bn2 = nn.BatchNorm2d(mid_channels)

        # second grouped 1x1 (pointwise) convolution
        self.conv3 = nn.Conv2d(mid_channels, out_channels, kernel_size=1,
                               groups=groups, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)

        self.shortcut = nn.Sequential()
        if stride == 2:
            # stride-2 units use a 3x3 average-pooled shortcut,
            # concatenated with the main branch in forward()
            self.shortcut = nn.Sequential(nn.AvgPool2d(3, stride=2, padding=1))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = shuffle_channels(out, self.groups)
        out = self.bn2(self.conv2(out))
        out = self.bn3(self.conv3(out))
        res = self.shortcut(x)
        # stride-2 units concatenate with the shortcut, stride-1 units add to it
        if self.stride == 2:
            out = F.relu(torch.cat([out, res], 1))
        else:
            out = F.relu(out + res)
        return out
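
# Quick shape check (illustrative; the channel counts are taken from the G2
# config below): the first stage-2 block has in_channels=24 and
# out_channels=200-24=176, so concatenating the 24 shortcut channels with
# the 176 unit channels yields 200 channels at half the resolution:
#   block = ShuffleBottleNeck(24, 176, stride=2, groups=2)
#   block(torch.randn(1, 24, 32, 32)).shape  # torch.Size([1, 200, 16, 16])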


class ShuffleNet(nn.Module):
    def __init__(self, cfg):
        super(ShuffleNet, self).__init__()
        out_planes = cfg['out_planes']
        num_blocks = cfg['num_blocks']
        groups = cfg['groups']

        # CIFAR-sized stem: a 1x1 conv on the 32x32 input (the ImageNet model
        # in the paper instead uses a 3x3 stride-2 conv followed by max pooling)
        self.conv1 = nn.Conv2d(3, 24, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(24)
        self.in_planes = 24
        self.layer1 = self._make_layer(out_planes[0], num_blocks[0], groups)
        self.layer2 = self._make_layer(out_planes[1], num_blocks[1], groups)
        self.layer3 = self._make_layer(out_planes[2], num_blocks[2], groups)
        self.linear = nn.Linear(out_planes[2], 10)  # 10-way classifier (CIFAR-10-style)

    def _make_layer(self, out_planes, num_blocks, groups):
        layers = []
        for i in range(num_blocks):
            if i == 0:
                # the first block of each stage downsamples; its output channel
                # count is out_planes - in_planes so that concatenating the
                # shortcut's in_planes channels yields out_planes in total
                layers.append(ShuffleBottleNeck(self.in_planes,
                                                out_planes - self.in_planes,
                                                stride=2, groups=groups))
            else:
                layers.append(ShuffleBottleNeck(self.in_planes,
                                                out_planes,
                                                stride=1, groups=groups))
            self.in_planes = out_planes
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = F.avg_pool2d(out, 4)  # global average pool: the map is 4x4 for 32x32 inputs
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out


def ShuffleNetG2():
    cfg = {
        'out_planes': [200, 400, 800],
        'num_blocks': [4, 8, 4],
        'groups': 2
    }
    return ShuffleNet(cfg)

def ShuffleNetG3():
    cfg = {
        'out_planes': [240, 480, 960],
        'num_blocks': [4, 8, 4],
        'groups': 3
    }
    return ShuffleNet(cfg)


def test():
    net = ShuffleNetG2()
    x = torch.randn(1, 3, 32, 32)
    y = net(x)
    print(y)

if __name__ == '__main__':
    test()
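
Running test() works out dimensionally: the stride-1 1×1 stem keeps the 32×32 resolution, the three stages downsample it to 16, 8, and then 4, and the 4×4 average pool leaves a single spatial position, so y is a (1, 10) tensor of class scores. This variant is sized for CIFAR-10-style 32×32 inputs; the ImageNet network in the paper uses a different stem and input resolution.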


