
EECS 598 Unsupervised Feature Learning

Instructor: Prof. Honglak Lee

Instructor webpage: http://www.eecs.umich.edu/~honglak/

Office hours: Th 5pm-6pm, 3773 CSE

Classroom: 1690 CSE

Time: M W 10:30am-12pm

Course Schedule

(Note: this schedule is subject to change.)

Each entry below gives the date, topic, assigned papers, and presenter.

9/8 - Introduction (Presenter: Honglak)

9/13 - Sparse coding (Presenter: Honglak)
- B. Olshausen and D. Field. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 1996.
- H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2007.

9/15 - Self-taught learning; application: computer vision (Presenter: Honglak)
- R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007.
- H. Lee, R. Raina, A. Teichman, and A. Y. Ng. Exponential Family Sparse Coding with Application to Self-taught Learning. IJCAI, 2009.
- J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. CVPR, 2009.

9/20 - Neural networks and deep architectures I (Presenter: Deepak)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 4.
- Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.

9/22 - Restricted Boltzmann machines (Presenter: Byung-soo)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 5.

9/27 - Variants of RBMs and autoencoders (Presenter: Chun-Yuen)
- P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
- H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. NIPS, 2008.

9/29 - Deep belief networks (Presenter: Anna)
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 6.
- R. Salakhutdinov. Learning Deep Generative Models. PhD Thesis, University of Toronto, 2009. Chapter 2.

10/4 - Convolutional deep belief networks (Presenter: Min-Yian)
- H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.

10/6 - Application: audio (Presenter: Yash)
- H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS, 2009.
- A. R. Mohamed, G. Dahl, and G. E. Hinton. Deep belief networks for phone recognition. NIPS 2009 Workshop on Deep Learning for Speech Recognition.

10/11 - Factorized models I (Presenter: Chun)
- M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS, 2010.

10/13 - Factorized models II (Presenter: Soonam)
- M. Ranzato and G. E. Hinton. Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR, 2010.

10/18 - No class (study break)

10/20 - Project proposal presentations

10/25 - Temporal modeling I (Presenter: Jeshua)
- G. Taylor, G. E. Hinton, and S. Roweis. Modeling Human Motion Using Binary Latent Variables. NIPS, 2007.
- G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style. ICML, 2009.

10/27 - Temporal modeling II (Presenter: Robert)
- G. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional Learning of Spatio-temporal Features. ECCV, 2010.

11/1 - Energy-based models (Presenter: Ryan)
- K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning Invariant Features through Topographic Filter Maps. CVPR, 2009.
- K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. Technical Report CBLL-TR-2008-12-01, 2008.

11/3 - Pooling and invariance (Presenter: Min-Yian)
- K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? ICCV, 2009.

11/8 - Evaluating RBMs (Presenter: Jeshua)
- R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
- R. Salakhutdinov. Learning Deep Generative Models. PhD Thesis, University of Toronto, 2009. Chapter 4.

11/10 - Deep Boltzmann machines (Presenter: Dae Yon)
- R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. AISTATS, 2009.

11/15 - Local coordinate coding (Presenter: Robert)
- K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning Using Local Coordinate Coding. NIPS, 2009.
- J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained Linear Coding for Image Classification. CVPR, 2010.

11/17 - Deep architectures II (Presenter: Soonam)
- H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring Strategies for Training Deep Neural Networks. JMLR, 2009.

11/22 - Deep architectures III (Presenter: Chun)
- D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010.

11/24 - Application: computer vision II (Presenter: Dae Yon)
- J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. CVPR, 2010.
- Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning Mid-Level Features for Recognition. CVPR, 2010.

11/29 - Pooling and invariance II (Presenter: Anna)
- I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng. Measuring invariances in deep networks. NIPS, 2009.
- Y. Boureau, J. Ponce, and Y. LeCun. A Theoretical Analysis of Feature Pooling in Visual Recognition. ICML, 2010.

12/1 - Application: natural language processing (Presenter: Guanyu)
- R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. ICML, 2008.

12/13 - Project presentations I

12/15 - Project presentations II

12/19 - Final project report due
