SqueezeNet

The SqueezeNet architecture

Smaller CNNs offer at least three advantages: they require less computation, they consume less bandwidth when models are exported to clients, and they are more feasible to deploy on FPGAs and embedded hardware. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, model compression techniques can shrink SqueezeNet to less than 0.5MB.
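For scale: AlexNet has roughly 60 million parameters, so a 50x reduction leaves SqueezeNet with about 1.2 million, or roughly 4.8MB stored as 32-bit floats; model compression (Deep Compression in the paper) then brings that below 0.5MB.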

  • Strategy 1. Replace 3x3 filters with 1x1 filters.
  • Strategy 2. Decrease the number of input channels to 3x3 filters.
  • Strategy 3. Downsample late in the network so that convolution layers have large activation maps.
Fire Module
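The Fire module embodies Strategies 1 and 2: a "squeeze" layer of 1x1 filters limits the number of input channels fed into the following "expand" layer, which mixes 1x1 and 3x3 filters. Below is a minimal PyTorch sketch; the class and hyperparameter names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Sketch of a Fire module: 1x1 "squeeze" convs feeding parallel
    1x1 and 3x3 "expand" convs whose outputs are concatenated."""
    def __init__(self, in_channels, s1x1, e1x1, e3x3):
        super().__init__()
        # Squeeze: 1x1 convs reduce the channels seen by the 3x3 filters (Strategy 2)
        self.squeeze = nn.Conv2d(in_channels, s1x1, kernel_size=1)
        # Expand: mostly 1x1 filters instead of 3x3 (Strategy 1), plus some 3x3
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Output has e1x1 + e3x3 channels
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)
```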
Macroarchitectural view of our SqueezeNet architecture
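At the macroarchitecture level, a standalone convolution layer is followed by a stack of Fire modules with max-pooling placed relatively late in the network (Strategy 3), and the classifier is a 1x1 convolution plus global average pooling rather than large fully connected layers. The following is a rough sketch of the vanilla SqueezeNet (v1.0) ordering, reusing the Fire module above; the filter counts follow the paper's Table 1 and are illustrative, not a verified reimplementation.

```python
class SqueezeNet(nn.Module):
    """Rough sketch of the SqueezeNet v1.0 macroarchitecture."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),   # downsampling is kept sparse and late (Strategy 3)
            Fire(96, 16, 64, 64),
            Fire(128, 16, 64, 64),
            Fire(128, 32, 128, 128),
            nn.MaxPool2d(kernel_size=3, stride=2),
            Fire(256, 32, 128, 128),
            Fire(256, 48, 192, 192),
            Fire(384, 48, 192, 192),
            Fire(384, 64, 256, 256),
            nn.MaxPool2d(kernel_size=3, stride=2),
            Fire(512, 64, 256, 256),
        )
        # 1x1 conv classifier + global average pooling, no fully connected layers
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Conv2d(512, num_classes, kernel_size=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        x = self.classifier(self.features(x))
        return torch.flatten(x, 1)
```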

Experiment

References:
Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016.
