Bag of Tricks for Image Classification with CNNs

Large-batch training

  1. Linear scaling learning rate
    • e.g. ResNet-50 trained with SGD at batch size 256 uses an initial learning rate of 0.1
    • initial learning rate = 0.1 × b/256, where b is the batch size (see the sketch below)
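A minimal sketch of the linear scaling rule, using the ResNet-50 baseline above (plain Python; the function name is illustrative):

```python
def scaled_lr(batch_size, base_lr=0.1, base_batch=256):
    """Linearly scale the initial learning rate with batch size."""
    return base_lr * batch_size / base_batch

print(scaled_lr(1024))  # batch size 1024 -> initial learning rate 0.4
```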
  2. Learning rate warm up
    • at the beginning of training, the parameters are far from the final solution, so a large learning rate is unstable
    • e.g. use the first m batches to warm up: with initial learning rate η, at batch i (1 ≤ i ≤ m) set the learning rate to iη/m (see the sketch below)
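A minimal sketch of linear warmup (plain Python; the epoch and batch counts in the example are illustrative):

```python
def warmup_lr(i, m, eta):
    """Gradual warmup: ramp the learning rate linearly from ~0 to eta
    over the first m batches; i is the current batch, 1 <= i <= m."""
    return i * eta / m

m = 5 * 100  # e.g. 5 warmup epochs x 100 batches per epoch
for i in (1, 250, 500):
    print(warmup_lr(i, m, eta=0.4))  # 0.0008, 0.2, 0.4
```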
  3. Zero γ
    • Batch Normalization computes γx̂ + β. Normally γ is initialized to 1 and β to 0
    • Instead, initialize γ = 0 in the BN layer that sits at the end of each residual block
    • each residual block then just passes through its shortcut at initialization, which mimics a shallower network and makes training easier in the initial stage (see the sketch below)
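A sketch in PyTorch, assuming a torchvision-style ResNet-50 whose Bottleneck blocks name their last BN layer `bn3`:

```python
import torch.nn as nn
from torchvision.models import resnet50
from torchvision.models.resnet import Bottleneck

model = resnet50()
for m in model.modules():
    if isinstance(m, Bottleneck):
        # zero the scale of the last BN in each residual block,
        # so the block initially outputs only its shortcut
        nn.init.zeros_(m.bn3.weight)
```

torchvision's ResNet constructors also expose this directly via `resnet50(zero_init_residual=True)`.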
  4. No bias decay
    • by default, weight decay is applied to both weights and biases
    • it is recommended to apply weight decay only to the weights (regularization to avoid overfitting); biases and the BN parameters γ and β are left unregularized (see the sketch below)
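A sketch with PyTorch parameter groups, reusing `model` from the sketch above; the 1-D check catches biases and BN γ/β, and the lr/weight-decay values are illustrative:

```python
import torch

decay, no_decay = [], []
for name, p in model.named_parameters():
    # biases and BN gamma/beta are 1-D tensors; leave them unregularized
    (no_decay if p.ndim == 1 else decay).append(p)

optimizer = torch.optim.SGD(
    [{"params": decay, "weight_decay": 1e-4},
     {"params": no_decay, "weight_decay": 0.0}],
    lr=0.1, momentum=0.9)
```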

Low-precision training (reducing the number of bits)

  1. Normal setting: 32-bit floating point (FP32) precision
  2. Trick: switch to FP16 with a larger batch size (1024); training is much faster and accuracy can even improve (see the sketch below)
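The paper keeps an FP32 master copy of the weights while computing in FP16. A sketch using PyTorch automatic mixed precision as a stand-in (not the paper's exact setup); `model`, `criterion`, `optimizer`, and `loader` are assumed to be defined, with the model on a CUDA device:

```python
import torch

scaler = torch.cuda.amp.GradScaler()  # loss scaling avoids FP16 underflow

for images, labels in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # forward pass runs in FP16 where safe
        loss = criterion(model(images.cuda()), labels.cuda())
    scaler.scale(loss).backward()    # backward on the scaled loss
    scaler.step(optimizer)           # unscales gradients, then steps
    scaler.update()
```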

Model Tweaks

ResNet Architecture

[Figure: ResNet architecture]
  1. ResNet-B
    • avoids the information loss caused by the 1×1 conv with stride 2 in path A: the stride-2 downsampling is moved to the 3×3 conv, which sees every input position
  2. ResNet-C
    • to reduce computational cost, the 7×7 conv in the input stem is replaced with three 3×3 convs
  3. ResNet-D
    • the 1×1 conv with stride 2 on path B (the shortcut) of ResNet-B still loses information; adding an avgpool with stride 2 before it (and setting the conv's stride to 1) effectively avoids this loss (see the sketch after the figure)
[Figure: ResNet-B, ResNet-C, and ResNet-D tweaks]
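A sketch of the ResNet-D shortcut path in PyTorch; `resnet_d_shortcut` is a hypothetical helper, not code from the paper:

```python
import torch.nn as nn

def resnet_d_shortcut(in_ch, out_ch):
    """Downsampling shortcut (path B) in ResNet-D: the avg-pool carries
    the stride, so the 1x1 conv (stride 1) sees every input position."""
    return nn.Sequential(
        nn.AvgPool2d(kernel_size=2, stride=2),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(out_ch),
    )
```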

Training Refinement

  1. Cosine Learning Rate Decay

    • η_t = \frac{1}{2}\left(1 + \cos\left(\frac{tπ}{T}\right)\right)η
      • where T is the total number of batches (ignoring the warmup stage)
      • t is the current batch
      • η is the initial learning rate
    • the learning rate decreases slowly at the beginning and the end, which can potentially improve training compared with step decay (see the sketch below)
[Figure: cosine decay learning rate schedule]
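A minimal sketch of the schedule above in plain Python:

```python
import math

def cosine_lr(t, T, eta):
    """Cosine decay from eta down to 0 over T batches
    (the warmup stage is handled separately)."""
    return 0.5 * (1.0 + math.cos(t * math.pi / T)) * eta
```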
  2. Label Smoothing

    • replace the one-hot target with a smoothed distribution: the true class gets probability 1 − ε and every other class gets ε/(K − 1), where K is the number of classes (the paper uses ε = 0.1)
    • this discourages the model from becoming over-confident about the training labels (see the sketch below)

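A sketch of a label-smoothing loss in PyTorch (the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, eps=0.1):
    """Cross entropy against smoothed targets: the true class gets
    1 - eps, the other K - 1 classes share eps equally."""
    K = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (K - 1))
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=-1).mean()
```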
  3. Knowledge Distillation

    1. design a large, complex network (N1)
    2. train N1 on the data to obtain model M1
    3. design a simpler network (N0) based on the complex one
    4. run M1's softmax with temperature T = 20 on the data to obtain soft targets
    5. combine the soft targets and hard targets with a weighted sum to get the final Target (0.1 : 0.9 recommended; see the loss sketch below)
    6. train N0 on the dataset with label = Target (T = 20) to obtain M0
    7. set T = 1; M0 is the final trained compact model
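A sketch of steps 4–6 folded into one loss (PyTorch). Since cross entropy is linear in the target distribution, weighting the soft and hard targets is equivalent to weighting two losses; the KL form and the T² factor follow Hinton et al.'s distillation and are assumptions beyond the outline above:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=20.0, w_soft=0.1, w_hard=0.9):
    """Weighted sum of the soft-target loss (teacher at temperature T)
    and the ordinary hard-target cross entropy (0.1 : 0.9 as above)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # T^2 restores gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return w_soft * soft + w_hard * hard
```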
  4. Mixup Training

    • a data augmentation method

    • weighted linear interpolation of two examples (a convex combination)

      • x̂ = λx_i + (1 − λ)x_j
      • ŷ = λy_i + (1 − λ)y_j
    • λ ∈ [0, 1], drawn from a Beta(α, α) distribution; in mixup training, only the new mixed example (x̂, ŷ) is used (see the sketch below)
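A sketch of mixup for one batch in PyTorch; `mixup_batch` is an illustrative name, `y_onehot` is assumed to be a one-hot (or smoothed) label matrix, and α = 0.2 is the paper's setting:

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Mix a batch with a shuffled copy of itself; only the mixed
    examples (x_hat, y_hat) are used for training."""
    lam = float(np.random.beta(alpha, alpha))  # lambda ~ Beta(alpha, alpha)
    idx = torch.randperm(x.size(0))
    x_hat = lam * x + (1 - lam) * x[idx]
    y_hat = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_hat, y_hat
```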
  5. Result

[Table: Large Batch Training results]
[Table: Refinements results]