Style Transfer Model Compression, Part 1

        Week 5 is coming to an end. The task scheduled for this week was the compression and deployment of the style transfer model. For deployment the plan is to use a library such as TensorFlow Lite, but my Android studies have fallen behind schedule, so that part is postponed; for now the goal is to finish the model compression.

        At first I had no idea how to approach model compression. An online search turned up plenty of methods, such as parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation, but knowing the concepts alone gave me nowhere to start. So I sat down and carefully read the style transfer survey paper, Neural Style Transfer: A Review. While reading it, I noticed that its author, Yongcheng Jing, mentioned in his Zhihu column that, with help from Zhenchuan Huang (黄真川) of the Taobao AI Team, he had compressed the TF model down to 0.99M. I asked about the model compression method in a comment under that article, and Yongcheng Jing's answer was: use smaller kernels.

        Since I could not find the exact model architecture he was referring to, I had to adjust it myself. Below is the pseudocode of the image transformation network, which consists of 3 conv layers, 5 residual blocks, and 3 deconv layers; its parameters are concentrated in the residual blocks.

def net(image, training):
    # Image transformation network. conv2d(x, in_ch, out_ch, kernel, stride),
    # instance_norm, residual and resize_conv2d are helpers defined elsewhere
    # in the project.
    # Encoder: 3 conv layers.
    conv1 = relu(instance_norm(conv2d(image, 3, 32, 9, 1)))
    conv2 = relu(instance_norm(conv2d(conv1, 32, 64, 3, 2)))
    conv3 = relu(instance_norm(conv2d(conv2, 64, 128, 3, 2)))
    # 5 residual blocks, each at 128 channels with 3x3 kernels.
    res1 = residual(conv3, 128, 3, 1)
    res2 = residual(res1, 128, 3, 1)
    res3 = residual(res2, 128, 3, 1)
    res4 = residual(res3, 128, 3, 1)
    res5 = residual(res4, 128, 3, 1)
    # Decoder: 2 resize-convolution upsampling layers and a final conv.
    deconv1 = relu(instance_norm(resize_conv2d(res5, 128, 64, 3, 2, training)))
    deconv2 = relu(instance_norm(resize_conv2d(deconv1, 64, 32, 3, 2, training)))
    deconv3 = tf.nn.tanh(instance_norm(conv2d(deconv2, 32, 3, 9, 1)))
    return deconv3
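
        The residual helper is not shown in the post; for reference, here is a minimal sketch of what it might look like, following the common two-conv residual block with an identity shortcut (an assumption; the project's actual implementation may differ), reusing the conv2d and instance_norm helpers from above:

def residual(x, filters, kernel, strides):
    # Two convolutions with instance norm; the input is added back to the
    # output (identity shortcut), so the block preserves the tensor shape.
    conv = relu(instance_norm(conv2d(x, filters, filters, kernel, strides)))
    conv = instance_norm(conv2d(conv, filters, filters, kernel, strides))
    return x + conv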

        The details of each weight tensor are listed below:

<tf.Variable 'conv1/conv/weight:0' shape=(9, 9, 3, 32) dtype=float32_ref>
<tf.Variable 'conv2/conv/weight:0' shape=(3, 3, 32, 64) dtype=float32_ref>
<tf.Variable 'conv3/conv/weight:0' shape=(3, 3, 64, 128) dtype=float32_ref>
<tf.Variable 'res1/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res1/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res2/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res2/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res3/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res3/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res4/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res4/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res5/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res5/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'deconv1/conv_transpose/conv/weight:0' shape=(3, 3, 128, 64) dtype=float32_ref>
<tf.Variable 'deconv2/conv_transpose/conv/weight:0' shape=(3, 3, 64, 32) dtype=float32_ref>
<tf.Variable 'deconv3/conv/weight:0' shape=(9, 9, 32, 3) dtype=float32_ref>
(variables belonging to Adam are not shown)
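
        A listing like this can be obtained by iterating over the graph's variables after it is built; a minimal sketch using the TF 1.x API (the Adam slot variables also show up here and were omitted above by hand):

# Print every variable in the current graph (TF 1.x); this includes
# Adam's slot variables, which the listing above leaves out.
for v in tf.global_variables():
    print(v)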

        The original model consists of 3 files: data (20.1MB), index (2.5KB), and meta (5.7MB). The data file holds the values of all the variables in the network, while the meta file holds the graph structure and the variable names, so the goal of compression is to shrink the data file. To that end, I changed the kernels of the 5 middle residual blocks from 3x3 to 1x1 and then counted the parameters of each layer (ignoring the Adam parameters), which gives the following table (a short script that reproduces these counts follows it):

layer      params (before)   params (after)   reduction
conv1                 7776             7776          0%
conv2                18432            18432          0%
conv3                73728            73728          0%
res1                294912            32768       88.9%
res2                294912            32768       88.9%
res3                294912            32768       88.9%
res4                294912            32768       88.9%
res5                294912            32768       88.9%
deconv1              73728            73728          0%
deconv2              18432            18432          0%
deconv3               7776             7776          0%
total              1674432           363712       78.3%
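
        The per-layer counts above are just the products of the weight-tensor dimensions from the variable listing; the following sketch reproduces the table's totals (the shape list mirrors the shapes shown earlier):

import numpy as np

# Conv weight shapes copied from the variable listing; each residual
# block contains two conv kernels, so res1-res5 contribute 10 kernels.
def params(kernel):
    shapes = ([(9, 9, 3, 32), (3, 3, 32, 64), (3, 3, 64, 128)]     # conv1-3
              + [(kernel, kernel, 128, 128)] * 10                  # res1-5
              + [(3, 3, 128, 64), (3, 3, 64, 32), (9, 9, 32, 3)])  # deconv1-3
    return sum(int(np.prod(s)) for s in shapes)

before, after = params(3), params(1)
print(before, after, '%.1f%%' % (100 * (1 - after / before)))
# prints: 1674432 363712 78.3%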

        As the table shows, the parameter count dropped by 78.3%, and in the experiment the model shrank from 20.1MB to 4.4MB, which agrees with the calculation: with Adam, each float32 weight is stored alongside two moment slot variables, so the data file costs roughly 3 x 4 = 12 bytes per parameter, and 363712 x 12 bytes ≈ 4.4MB (likewise, 1674432 x 12 bytes ≈ 20.1MB). As a bonus, training also got faster: it now takes only 1-2 hours for the total loss to converge to around 200,000.
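
        To double-check where the bytes in the data file go, one can list the variables stored in a checkpoint and sum their sizes. A minimal sketch using the TF 1.x checkpoint utilities ('models/' is a placeholder path, not the project's actual directory):

import numpy as np
import tensorflow as tf

ckpt = tf.train.latest_checkpoint('models/')  # placeholder path

total = 0
for name, shape in tf.train.list_variables(ckpt):
    total += int(np.prod(shape)) * 4  # float32 = 4 bytes each
    print(name, shape)
print('approx data file size: %.1f MB' % (total / 1e6))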

[Figure total_loss.png: total loss curve during training]