  • Deep Learning Hyperparameter Tuning Tips

    Coarse to fine: in practice, first run a rough search over a broad range, then narrow the range around where the good results appear and search more finely. Start by consulting related papers and using the parameters they report as the initial settings... (a minimal sketch of this coarse-to-fine search appears after this list)

  • Knowledge transfer summary-2

    As I understand it, "knowledge distillation" means transferring the "knowledge" of a trained, complex model into a network with a simpler structure, or having the simple network learn the "knowledge" in, or imitate, the complex... (a minimal sketch of the distillation loss appears after this list)

  • A Short Summary of Network Quantization

    As an important model-compression method, network quantization can roughly be divided into two categories. Directly lowering parameter precision: typical work includes binary networks, ternary networks, and XNOR-Net. HORQ and Net... (a minimal sketch of weight binarization appears after this list)

  • Speeding-up CNN using CP-Decomposition

    Approach: We propose a simple two-step approach for speeding up convolution...

  • Exploiting Linear Structure Within CNN

    Approach: Matrix Decomposition; Higher Order Tensor Approximations; Monochromatic... (a minimal truncated-SVD sketch of the matrix-decomposition idea appears after this list)

  • Dynamic Network Surgery

    Approach: Song Han recently proposed compressing DNNs by deleting unimportant... (a minimal prune-and-splice sketch appears after this list)

  • Speed up CNN with Low Rank Expansions

    Approach; Experiment; References: Speeding up Convolutional Neural Networks...

  • Learning Structured Sparsity in DNN

    Approach: The optimization target for learning the filter-wise and channel-wise... (a minimal group-lasso sketch appears after this list)

  • Fixed-point Factorized Networks

    Approach: Fixed-point Factorization; Full-precision Weights Recovery. The q...
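
The sketches below illustrate a few of the techniques previewed in the list above. They are minimal, self-contained illustrations written for this index, not code taken from the articles; all layer shapes, thresholds, and objective functions are made-up placeholders.

First, the coarse-to-fine search from "Deep Learning Hyperparameter Tuning Tips": run a broad log-scale random search, then re-search a narrower range centered on the best coarse result. The `validation_error` objective here is a stand-in for an actual training run.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(lr, weight_decay):
    # Placeholder objective; in practice this would train a model
    # and return its validation error.
    return (np.log10(lr) + 2.5) ** 2 + (np.log10(weight_decay) + 4.0) ** 2

def random_search(lr_range, wd_range, trials):
    # Sample log-uniformly inside the given ranges and keep the best point.
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(*lr_range)
        wd = 10 ** rng.uniform(*wd_range)
        err = validation_error(lr, wd)
        if best is None or err < best[0]:
            best = (err, lr, wd)
    return best

# Coarse stage: broad log-scale ranges.
err, lr, wd = random_search(lr_range=(-5, -1), wd_range=(-6, -2), trials=30)

# Fine stage: narrow the ranges around the best coarse result.
fine_lr = (np.log10(lr) - 0.5, np.log10(lr) + 0.5)
fine_wd = (np.log10(wd) - 0.5, np.log10(wd) + 0.5)
err, lr, wd = random_search(lr_range=fine_lr, wd_range=fine_wd, trials=30)
print(f"best lr={lr:.2e}, weight_decay={wd:.2e}, val_err={err:.4f}")
```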
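
A sketch of the soft-target loss behind the "Knowledge transfer summary-2" entry, using the standard temperature-scaled softmax formulation of knowledge distillation (Hinton et al.); the teacher and student logits are random stand-ins, and the T and alpha values are arbitrary.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T gives softer probabilities.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft loss: cross-entropy between the teacher's and student's
    # temperature-softened distributions (scaled by T^2 as in Hinton et al.).
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    # Hard loss: ordinary cross-entropy against the ground-truth labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 10))   # stand-in teacher logits
student = rng.normal(size=(8, 10))   # stand-in student logits
labels = rng.integers(0, 10, size=8)
print(distillation_loss(student, teacher, labels))
```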
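
A sketch of the first category in "A Short Summary of Network Quantization", directly lowering parameter precision: XNOR-Net-style binarization approximates a weight tensor W by alpha * sign(W), where alpha is the mean absolute weight. The filter-bank shape below is arbitrary.

```python
import numpy as np

def binarize(W):
    # XNOR-Net-style weight binarization: approximate W by alpha * B,
    # where B = sign(W) and alpha = mean(|W|) minimizes ||W - alpha*B||^2.
    alpha = np.abs(W).mean()
    B = np.where(W >= 0, 1.0, -1.0)
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 3, 3, 3))   # toy conv filter bank
alpha, B = binarize(W)
approx = alpha * B
rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
print(f"alpha={alpha:.4f}, relative approximation error={rel_err:.3f}")
```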
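
A sketch of the matrix-decomposition idea behind "Exploiting Linear Structure Within CNN": approximate a fully connected layer's weight matrix with a rank-k truncated SVD, so one large matrix multiply becomes two much smaller ones. The layer sizes and rank are illustrative, and a random matrix is used, so the approximation error is large; the cost accounting in the comments is the point.

```python
import numpy as np

def low_rank_factors(W, k):
    # Truncated SVD: W (m x n) ~ A @ B with A (m x k), B (k x n).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]          # absorb singular values into A
    B = Vt[:k, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 4096))     # toy FC weight matrix
A, B = low_rank_factors(W, k=128)

x = rng.normal(size=(4096,))
y_full = W @ x                        # original layer: 1024*4096 multiplies
y_low = A @ (B @ x)                   # factored layer: (1024+4096)*128 multiplies
print("relative output error:", np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full))
```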
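
A sketch of the prune-and-splice masking idea behind "Dynamic Network Surgery": weights whose magnitude drops below one threshold are masked out, and a masked weight re-enters if its magnitude grows above a second threshold. The thresholds here are invented and the surrounding training loop is omitted.

```python
import numpy as np

def update_mask(W, mask, prune_thresh, splice_thresh):
    # Prune: weights whose magnitude falls below prune_thresh are masked out.
    # Splice: previously pruned weights that grow above splice_thresh re-enter.
    mask = mask.copy()
    mask[np.abs(W) < prune_thresh] = 0.0
    mask[np.abs(W) > splice_thresh] = 1.0
    return mask

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))
mask = np.ones_like(W)

mask = update_mask(W, mask, prune_thresh=0.05, splice_thresh=0.08)
W_pruned = W * mask   # only surviving weights are used in the forward pass
print(f"sparsity: {(mask == 0).mean():.1%}")
```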
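
A sketch of the group-sparsity regularizer behind "Learning Structured Sparsity in DNN": a group-lasso (sum of l2 norms) penalty over each filter and each input channel pushes whole filters and channels toward zero. The regularization strengths and filter shape are placeholders.

```python
import numpy as np

def structured_sparsity_penalty(W, lam_filter=1e-4, lam_channel=1e-4):
    # W has shape (out_filters, in_channels, kh, kw).
    # Filter-wise group lasso: sum over output filters of ||W_n||_2.
    filter_norms = np.sqrt((W ** 2).sum(axis=(1, 2, 3)))
    # Channel-wise group lasso: sum over input channels of ||W_{:,c}||_2.
    channel_norms = np.sqrt((W ** 2).sum(axis=(0, 2, 3)))
    return lam_filter * filter_norms.sum() + lam_channel * channel_norms.sum()

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 32, 3, 3))
print("group-sparsity penalty:", structured_sparsity_penalty(W))
```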