Tensorizing Neural Networks
Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov
(Submitted on 22 Sep 2015)
Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by the commonly used fully-connected layers, which makes it hard to deploy the models on low-end devices and hinders further increases in model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format, so that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report a compression factor of up to 200,000 times for the dense weight matrix of a fully-connected layer, leading to a compression factor of up to 7 times for the whole network.
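The abstract contains no code; the following is a minimal NumPy sketch (not the authors' implementation) of the TT-matrix format it describes. The mode sizes (4, 4, 4, 4) and TT-ranks (1, 3, 3, 3, 1) are illustrative assumptions, chosen so that a 256x256 dense matrix is represented by four small cores.

import numpy as np

# Illustrative shapes (assumptions): a 256x256 weight matrix factored as a
# TT-matrix with output/input modes (4, 4, 4, 4) x (4, 4, 4, 4).
out_modes = [4, 4, 4, 4]   # prod = 256 output features
in_modes = [4, 4, 4, 4]    # prod = 256 input features
ranks = [1, 3, 3, 3, 1]    # TT-ranks; boundary ranks are 1 by definition

# Core G_k has shape (r_{k-1}, m_k, n_k, r_k): for fixed (m_k, n_k) it is an
# r_{k-1} x r_k matrix, and the matrix element W[(m_1..m_d), (n_1..n_d)] is
# the product G_1[m_1, n_1] G_2[m_2, n_2] ... G_d[m_d, n_d].
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], out_modes[k], in_modes[k], ranks[k + 1]))
         for k in range(4)]

def tt_to_dense(cores, out_modes, in_modes):
    """Contract the TT cores back into the full dense weight matrix."""
    res = cores[0]
    for core in cores[1:]:
        # Contract the trailing rank axis of `res` with the leading rank
        # axis of the next core.
        res = np.tensordot(res, core, axes=([-1], [0]))
    res = res[0, ..., 0]  # drop the two boundary rank-1 axes
    d = len(cores)
    # Axes are interleaved (m_1, n_1, ..., m_d, n_d); group all m's, then all n's.
    res = res.transpose(list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2)))
    return res.reshape(int(np.prod(out_modes)), int(np.prod(in_modes)))

W = tt_to_dense(cores, out_modes, in_modes)
print(W.shape)  # (256, 256)
print(W.size, "dense parameters vs", sum(c.size for c in cores), "TT parameters")

Even at this toy scale the cores hold 384 numbers against 65,536 in the dense matrix; the 200,000x figure in the abstract comes from the much larger fully-connected layers of the VGG networks, where the gap between the dense parameter count and the TT parameter count grows dramatically.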
Subjects: Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:1509.06569 [cs.LG]
(or arXiv:1509.06569v1 [cs.LG] for this version)