Efficient Processing of Deep Neural Networks (DNN)

This is a collection of papers and projects that interest me.



==================================

----- Survey  -------------------
Efficient Processing of Deep Neural Networks: A Tutorial and Survey

Efficient Methods and Hardware for Deep Learning (paper analysis)

----- Quantization  -------------------
IEEE 754, fp16, fp32, fp64, int8, int4
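Not from any specific reference above; just a minimal numpy sketch of symmetric per-tensor int8 quantization, to make the fp32 to int8 mapping concrete. The scale convention (largest absolute value mapped to 127) is one common choice, not the only one.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of an fp32 array to int8."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to fp32; the gap to the input is the quantization error."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(x)
print("max abs error:", np.abs(dequantize_int8(q, s) - x).max())
```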

----- Compression and Pruning  -------------------

----- Matrix  -------------------
Wikipedia:
Multiplication algorithm
Matrix multiplication algorithm
Divide-and-conquer algorithm, sub-cubic algorithms, Strassen algorithm, Coppersmith-Winograd algorithm
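A minimal sketch of the Strassen algorithm from the list above, assuming square matrices whose size is a power of two; the base-case threshold of 64 is arbitrary, and production BLAS libraries use blocked classical multiplication rather than this recursion.

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Strassen multiplication: 7 recursive products instead of 8, giving O(n^2.807)."""
    n = A.shape[0]
    if n <= 64:                      # fall back to ordinary multiplication for small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.randn(128, 128)
B = np.random.randn(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```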

The Matrix Calculus You Need For Deep Learning

Matrix Computations (Golub)

----- Optimization  -------------------
Optimization Methods for Large-Scale Machine Learning (pdf)


GEMM (General Matrix Multiplication)
Basic math libraries:
Basic Linear Algebra Subprograms (BLAS)
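A small sketch of what the BLAS level-3 GEMM routine computes, C <- alpha*A*B + beta*C; numpy's matrix product dispatches to whatever BLAS it is linked against, so this illustrates the operation, not a tuned kernel.

```python
import numpy as np

def gemm(alpha: float, A: np.ndarray, B: np.ndarray,
         beta: float, C: np.ndarray) -> np.ndarray:
    """GEMM as defined by BLAS level 3: C <- alpha * A @ B + beta * C."""
    return alpha * (A @ B) + beta * C   # A @ B calls into the linked BLAS

M, K, N = 64, 128, 32
A = np.random.randn(M, K).astype(np.float32)
B = np.random.randn(K, N).astype(np.float32)
C = np.zeros((M, N), dtype=np.float32)
C = gemm(1.0, A, B, 0.0, C)
print(C.shape)  # (64, 32)
```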

----- Parallel Computing  -------------------

----- Bayesian  -------------------
Novak et al., Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes

----- Statistical Learning ---------
(Lecture notes on The Elements of Statistical Learning)


----- Net ---------
LeNet-5, 1998, MNIST
AlexNet, 2012, ImageNet
GoogLeNet, 2014
VGGNet, 2014
ResNet (ResNet 2015, Wide ResNet, ResNeXt)
DenseNet, 2016
MobileNet, 2017
ShuffleNet, 2017

Super-Resolution
SRCNN, FSRCNN, FSRCNN-s, ESPCN, VDSR

----- Reinforcement Learning ---------

----- Adversarial Learning ---------

----- Transfer Learning ---------
(A Concise Handbook of Transfer Learning)

----- Evolutionary Learning ---------

----- Tutorial and Survey ---------
Tutorial on Hardware Accelerators for Deep Neural Networks

-------------- People --------------
fengbintu



===== Industry ===============
CPU, FPGA, DSP, GPU, ASIC

----- CUDA ---------



===== Application Domains ===============

----- Image Classification ---------

----- Object Detection ---------

----- Natural Language Processing ---------



===== Fundamentals ===============

----- Machine Learning ---------

Andrew Ng

Computer vision and convolutional neural network fundamentals, Stanford CS231n

Learning Semantic Image Representations at a Large Scale, by Jia Yangqing
Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning (Sebastian Raschka)

----- CNN ---------
Convolution
Activation function
Pooling
Fully connected layer / softmax
BP, backpropagation
Introduction to objective functions and gradient descent optimizers (BGD, SGD, RMSprop, Adam); see the sketch after this list
Hyperparameters, learning rate
Norm regularization, overfitting, Occam's razor, L0/L1/L2/nuclear norm
Low rank
Robust PCA, background modeling, Transform Invariant Low-rank Textures (TILT)
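Not from any listed reference; a minimal numpy sketch of the SGD and Adam updates mentioned above, with an optional L2 (weight decay) term for SGD. The hyperparameter defaults follow common practice and are assumptions here, not values from a specific paper.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01, weight_decay=0.0):
    """Plain SGD update with an optional L2 regularization (weight decay) term."""
    return w - lr * (grad + weight_decay * w)

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running first/second moment estimates, t starts at 1."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias correction
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# toy quadratic objective: minimize 0.5 * ||w||^2, whose gradient is w itself
w = np.ones(3)
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 101):
    w, m, v = adam_step(w, w, m, v, t)
print(w)  # each coordinate has moved from 1.0 toward the minimum at 0

w_sgd = sgd_step(np.ones(3), np.ones(3), lr=0.1, weight_decay=1e-4)
```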
