ThiNet

Approach

  • Filter selection
    The key idea: if a subset of the channels in layer (i + 1)'s input is sufficient to approximate the output of layer (i + 1), then the remaining channels can be safely removed from layer (i + 1)'s input.

Formally, the goal is to find the kept subset S that minimizes the reconstruction error:

  argmin_S Σ_{i=1..m} ( ŷ_i − Σ_{j∈S} x̂_{i,j} )²   s.t. |S| = C × r,

where |S| is the number of elements in the subset S, C is the number of input channels, and r is a pre-defined compression rate. Given a set of m training examples (x_i, y_i), where m is the product of the number of sampled images and the number of sampled locations, the greedy algorithm is as follows.
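The greedy selection above can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: it assumes a pre-collected matrix X of shape (m, C) in which X[i, j] is channel j's contribution x̂_{i,j} to the sampled layer-(i + 1) output, and it follows the equivalent formulation of greedily growing the removed set T so that the removed channels' summed contribution stays small.

```python
import numpy as np

def greedy_channel_selection(X, r):
    """Greedy channel subset selection (a sketch of ThiNet-style selection).

    X : (m, C) array; X[i, j] is channel j's contribution to the
        layer-(i + 1) output for training example i.
    r : pre-defined compression rate; keep round(C * r) channels.
    Returns the sorted indices of the channels to KEEP.
    """
    m, C = X.shape
    removed = []                # T: channels chosen for removal
    residual = np.zeros(m)      # summed contribution of removed channels
    while C - len(removed) > round(C * r):
        best_j, best_err = None, np.inf
        for j in range(C):
            if j in removed:
                continue
            # error if channel j were also removed
            err = np.sum((residual + X[:, j]) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        removed.append(best_j)
        residual += X[:, best_j]
    return [j for j in range(C) if j not in removed]
```

Note the per-step cost is O(C · m); the paper keeps this tractable by sampling a small number of locations per image rather than using the full feature maps.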


  • Pruning
    Weak channels in layer (i + 1)'s input, together with their corresponding filters in layer i, are pruned away, yielding a much smaller model.
  • Fine-tuning
    Fine-tuning is a necessary step to recover the generalization ability damaged by filter pruning. Then return to the filter-selection step to prune the next layer.
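The pruning step pairs two weight tensors: dropping an input channel of layer (i + 1) also drops the filter in layer i that produced it. A minimal sketch, assuming the common (out_channels, in_channels, kH, kW) weight layout; `prune_pair` is a hypothetical helper name, not from the paper:

```python
import numpy as np

def prune_pair(W_i, W_ip1, keep):
    """Prune a consecutive pair of conv layers given the kept channel indices.

    W_i   : layer-i weights, shape (C, in_prev, kH, kW)
    W_ip1 : layer-(i+1) weights, shape (out, C, kH, kW)
    keep  : indices of the channels selected to survive
    """
    W_i_pruned = W_i[keep]            # drop whole filters in layer i
    W_ip1_pruned = W_ip1[:, keep]     # drop matching input channels in layer i+1
    return W_i_pruned, W_ip1_pruned
```

After this slicing the network is retrained briefly (fine-tuned) before the next layer is pruned, which is what keeps the accumulated accuracy loss small.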

Experiment

ImageNet

References:
Jian-Hao Luo et al. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. ICCV 2017.
