"常见机器学习算法实现代码(DeepLearning Tutorials/PCA/kNN/logistic regression/Manifold Learning/SVM/GMM/Decision Tree/K-Means/Naive Bayes)" by wepon GitHub:O网页链接
【Artificial Intelligence】A new blog post from Stanford's Andrej Karpathy on playing Pong with deep reinforcement learning (Deep Reinforcement Learning: Pong from Pixels): [link], with the 130-line Python implementation: [link]. See the blog post for more details. [Miaopai video]
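For a sense of what those 130 lines do: the agent is a policy-gradient (REINFORCE) learner. Below is a compact numpy sketch of that style of training loop against a stub environment; the network size, hyperparameters, and the stand-in observations and rewards are illustrative assumptions, not Karpathy's actual code.

```python
# Minimal REINFORCE sketch in the spirit of pg-pong: a two-layer policy
# network, Bernoulli action sampling, and a return-weighted gradient step.
# Environment, sizes, and rewards are stand-ins (assumptions).
import numpy as np

H, D = 20, 4                      # hidden units, observation dim (assumed)
gamma, learning_rate = 0.99, 1e-2
rng = np.random.default_rng(0)
W1 = rng.normal(size=(H, D)) / np.sqrt(D)
W2 = rng.normal(size=H) / np.sqrt(H)

def policy_forward(x):
    h = np.maximum(0, W1 @ x)                    # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(W2 @ h)))          # P(action = 1)
    return p, h

def discount_rewards(r):
    out, running = np.zeros_like(r), 0.0
    for t in reversed(range(len(r))):
        running = running * gamma + r[t]
        out[t] = running
    return out

xs, hs, dlogps, rs = [], [], [], []
for t in range(100):                             # one stub "episode"
    x = rng.normal(size=D)                       # stand-in for a frame diff
    p, h = policy_forward(x)
    a = 1 if rng.random() < p else 0
    xs.append(x); hs.append(h); dlogps.append(a - p)  # d log pi / d logit
    rs.append(rng.choice([-1.0, 0.0, 1.0]))      # stand-in reward

adv = discount_rewards(np.array(rs))
adv = (adv - adv.mean()) / (adv.std() + 1e-8)    # standardize returns
dlogit = np.array(dlogps) * adv                  # modulate grads by advantage
dW2 = np.array(hs).T @ dlogit
dh = np.outer(dlogit, W2); dh[np.array(hs) <= 0] = 0  # backprop through ReLU
dW1 = dh.T @ np.array(xs)
W1 += learning_rate * dW1; W2 += learning_rate * dW2  # gradient ascent
```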
《Latest deep learning roundup: an OpenAI core researcher's PhD thesis》 [link] (PhD Thesis) 《Learning Algorithms from Data》 Wojciech Zaremba [New York University] (2016) [link] GitHub: [link] [link] [link]
"Deep Learning and Machine Learning mini-projects" by Dan Dixey GitHub:O网页链接
【Robotics, Deep Learning】The Fido project: [link] is a lightweight, open-source, and highly modular C++ machine learning library aimed at embedded electronics and robotics. Fido includes trainable neural networks, reinforcement learning methods, genetic algorithms, and a full robot simulator. GitHub: [link]
Today we look at a survey. David Blei, the father of topic models, has long championed variational inference. What is this research direction actually about, and what does Blei see as its open questions? I've put together a summary. [Daily paper reading] Variational Inference: A Rev...
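For orientation, the object at the center of that survey is the evidence lower bound (ELBO); a standard statement, with q(z) the tractable variational family:

```latex
\log p(x) = \log \int p(x, z)\, dz
\;\ge\; \mathbb{E}_{q(z)}\!\left[\log p(x, z) - \log q(z)\right]
\;\equiv\; \mathrm{ELBO}(q)
```

Maximizing the ELBO over q is equivalent to minimizing KL(q(z) ‖ p(z | x)), which is what turns posterior inference into an optimization problem.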
《SNN: Stacked Neural Networks》M Mohammadi, S Das [Stanford University] (2016) [link] S-NN: Stacked Neural Networks [link] An interesting paper: it takes features from pretrained NNs (such as VGG, GoogLeNet, Places, NIN), combines them, and trains an SVM on top to achieve transfer learning. Networks fine-tuned on a single task are accurate but generalize poorly; joint fine-tuning across multiple tasks gets both accuracy and generalization.
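A hedged sketch of that recipe: concatenate features from several pretrained networks and fit a linear SVM on top. The random arrays below stand in for real VGG/GoogLeNet features; all sizes and names are assumptions for illustration.

```python
# Stacked-features + linear SVM, as the post describes. The "pretrained
# features" here are random stand-ins; in the paper they come from
# VGG, GoogLeNet, Places, and NIN.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 600, 5
feats_vgg = rng.normal(size=(n, 128))    # stand-in for VGG features
feats_goog = rng.normal(size=(n, 64))    # stand-in for GoogLeNet features
X = np.hstack([feats_vgg, feats_goog])   # stack features from several nets
y = rng.integers(0, n_classes, size=n)   # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)   # transfer learning via a linear SVM
print("held-out accuracy:", clf.score(X_te, y_te))
```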
Hierarchical Variational Models, ICML'16 [link] Edward code: [link]
Extreme Stochastic Variational Inference: Distributed and Asynchronous, NIPS 2016 [link] Ref: F+Nomad LDA: A Scalable Asynchronous Distributed Algorithm for Topic Modeling, WWW 2015 [link] sample & update O(log T)
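That O(log T) sample-and-update cost is the kind of bound a Fenwick (binary indexed) tree over the T topic weights gives you; the sketch below is an illustrative stand-in rather than the paper's F+ tree, whose details differ.

```python
# O(log T) weighted sampling and weight updates over T topic weights
# with a Fenwick (binary indexed) tree. Illustrative stand-in only.
import random

class FenwickSampler:
    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * (self.n + 1)
        self.w = [0.0] * self.n
        for i, wt in enumerate(weights):
            self.update(i, wt)

    def update(self, i, new_weight):      # O(log T) weight change
        delta = new_weight - self.w[i]
        self.w[i] = new_weight
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):                      # sum of all weights
        s, j = 0.0, self.n
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def sample(self):                     # O(log T) via tree binary search
        u = random.random() * self.total()
        idx, bit = 0, 1
        while bit * 2 <= self.n:
            bit *= 2
        while bit:
            nxt = idx + bit
            if nxt <= self.n and self.tree[nxt] < u:
                u -= self.tree[nxt]
                idx = nxt
            bit //= 2
        return idx                        # zero-based topic index

s = FenwickSampler([0.1, 0.5, 0.2, 0.2])
s.update(1, 0.05)                         # change one topic weight in O(log T)
print(s.sample())
```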
A Neural Autoregressive Approach to Collaborative Filtering, ICML'16 [link] Restricted Boltzmann Machines for Collaborative Filtering, ICML 2007 [link] The Neural Autoregressive Distribution Estimator, AISTATS 2011 [link]
【TensorFlow – Concise Examples for Beginners】GitHub: aymericdamien/TensorFlow-Examples... TensorFlow tutorials and code examples for beginners, covering popular machine learning algorithms.
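In the spirit of those beginner examples, a minimal end-to-end TensorFlow fit, written against the current tf.keras API (the 2016 repo targets the TensorFlow API of its day); data and hyperparameters are made up.

```python
# Minimal linear regression with tf.keras. Synthetic data; the model
# should recover a slope near 3 and an intercept near 1.
import numpy as np
import tensorflow as tf

x = np.linspace(0, 1, 100).reshape(-1, 1).astype("float32")
y = 3.0 * x + 1.0 + np.random.normal(0, 0.05, size=x.shape).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(0.5), loss="mse")
model.fit(x, y, epochs=200, verbose=0)

w, b = model.layers[0].get_weights()
print(w.flatten(), b)   # should be close to [3.] and [1.]
```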
"常见机器学习算法实现代码(DeepLearning Tutorials/PCA/kNN/logistic regression/Manifold Learning/SVM/GMM/Decision Tree/K-Means/Naive Bayes)" by wepon GitHub:O网页链接
Deep learning course notes: [link]. They cover mathematical foundations, neural networks, CNN training, CNN applications, and more.
Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous Graphs
Author(s): Emaad Manzoor, Stony Brook University; Leman Akoglu*, SUNY Stony Brook
Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations
Author(s): Wei Cheng*, NEC Labs America; Kai Zhang, NEC Labs America; Haifeng Chen, NEC Research Lab; Guofei Jiang, NEC Labs America; Wei Wang, UC Los Angeles
Accelerating Online CP Decompositions for Higher Order Tensors
Author(s): Shuo Zhou*, University of Melbourne; Nguyen Vinh, University of Melbourne; James Bailey, ; Yunzhe Jia, University of Melbourne; Ian Davidson, University of California-Davis
Recurrent Temporal Point Process
Author(s): Nan Du*, Georgia Tech; Hanjun Dai, ; Rakshit Trivedi, ; Utkarsh Upadhyay, Max Planck Institute; Manuel Gomez-Rodriguez, MPI-SWS; Le Song,
Hierarchical Incomplete Multi-source Feature Learning for Spatiotemporal Event Forecasting
Author(s): Liang Zhao*, VT; Jieping Ye, University of Michigan at Ann Arbor; Feng Chen, SUNY Albany; Chang-Tien Lu, Virginia Tech; Naren Ramakrishnan, Virginia Tech
Fast Unsupervised Online Drift Detection
Author(s): Denis Dos Reis*, Universidade de São Paulo; Gustavo Batista, Universidade de Sao Paulo at Sao Carlos; Peter Flach, University of Bristol; Stan Matwin, Dalhousie University
Stochastic Optimization Techniques for Quantification Performance Measures
Author(s): Harikrishna Narasimhan, IACS, Harvard University; Shuai Li, University of Insubria; Purushottam Kar*, IIT Kanpur; Sanjay Chawla, QCRI-HBKU, Qatar; Fabrizio Sebastiani, QCRI-HBKU, Qatar
Transferring Knowledge between Cities: A Perspective of Multimodal Data and A Case Study in Air Quality
Author(s): Ying Wei*, Hong Kong Univ. of Sci. & Tech; Yu Zheng, Microsoft Research; Qiang Yang, HKUST
FLASH: Fast Bayesian Optimization for Data Analytic Pipelines
Author(s): Yuyu Zhang*, Georgia Institute of Technology; Mohammad Bahadori, Georgia Institute of Technology; Hang Su, Georgia Institute of Technology; Jimeng Sun, Georgia Institute of Technology
Efficient Shift-Invariant Dictionary Learning
Author(s): Guoqing Zheng*, Carnegie Mellon University; Yiming Yang, ; Jaime Carbonell,
Accelerated Stochastic Block Coordinate Descent with Optimal Sampling
Author(s): Aston Zhang*, UIUC; Quanquan Gu, University of Virginia
#LIBMF# [link] LIBMF: A Library for Parallel MF, JMLR'16 [link] A Fast Parallel Stochastic Gradient Method for MF, TIST'15 [link] A Learning-rate Schedule for Stochastic Gradient Methods to MF, PAKDD'15 [link]
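For context, the inner loop LIBMF parallelizes is plain SGD on the rating-reconstruction objective. A single-threaded numpy sketch follows; sizes, learning rate, and regularizer are made up, and LIBMF's block scheduling and learning-rate schedule, the papers' actual contributions, are not shown.

```python
# SGD matrix factorization: for each observed rating, step both latent
# factors toward reducing squared reconstruction error (L2-regularized).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam, lr = 50, 40, 8, 0.05, 0.02
P = 0.1 * rng.normal(size=(n_users, k))   # user factors
Q = 0.1 * rng.normal(size=(n_items, k))   # item factors
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
           for _ in range(2000)]           # synthetic (user, item, rating)

for epoch in range(20):
    rng.shuffle(ratings)
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        # simultaneous update: RHS uses the pre-step factors
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - lam * P[u]),
                      Q[i] + lr * (err * P[u] - lam * Q[i]))

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
print("train RMSE:", rmse)
```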
Hydra: Distributed Coordinate Descent Method for Learning with Big Data, JMLR'16 [link]
Latent Space Inference of Internet-Scale Networks, JMLR'16 [link] 【Data: Triangular Motif Representation of Networks】【Model: Scalable Network Model (STM) Based on Triangular Motifs】【Inference: Stochastic Variational Inference (SVI) for STM】【Distributed Implementation】
《One-shot Learning with Memory-Augmented Neural Networks》A Santoro, S Bartunov, M Botvinick, D Wierstra, T Lillicrap [Google DeepMind] (2016) [link] Theano implementation by Tristan Deleu [link] //@爱可可-爱生活: Summary by Hugo Larochelle [link]
"Q-Learning and DQN Reinforcement Learning to play the Helicopter Game - Keras based" by Dan Dixey O网页链接 GitHub:O网页链接
《Density estimation using Real NVP》L Dinh, J Sohl-Dickstein, S Bengio [University of Montreal & Google Brain] (2016) [link] Real NVP visual results: [link]
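The building block that makes Real NVP tractable is the affine coupling layer: half the dimensions pass through unchanged and parameterize a scale and shift of the other half, so inversion is exact and the Jacobian log-determinant is just the sum of the log-scales. A numpy sketch, with tiny random stand-ins for the paper's deep s(·) and t(·) networks:

```python
# Affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).
# Invertible in closed form; log|det J| = sum(s(x1)).
import numpy as np

rng = np.random.default_rng(0)
d, half = 4, 2
Ws, Wt = rng.normal(size=(half, half)), rng.normal(size=(half, half))

def s(x1): return np.tanh(x1 @ Ws)      # scale net (random stand-in)
def t(x1): return x1 @ Wt               # shift net (random stand-in)

def forward(x):
    x1, x2 = x[:half], x[half:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    logdet = s(x1).sum()                # log-determinant of the Jacobian
    return np.concatenate([x1, y2]), logdet

def inverse(y):
    y1, y2 = y[:half], y[half:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))  # exact inverse, no iteration needed
    return np.concatenate([y1, x2])

x = rng.normal(size=d)
y, logdet = forward(x)
print(np.allclose(inverse(y), x), logdet)   # True, plus the log-det
```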
《A First Course in Machine Learning, Second Edition》by Simon Rogers, Mark Girolami (CRC 2016) [link]
《Boosting Performance of Machine Learning Models》by Ved Prakash [link]
《Deep Reinforcement Learning: Pong from Pixels》by Andrej Karpathy [link]
《Deep Learning Trends @ ICLR 2016》by Tomasz Malisiewicz [link]
《Asynchrony begets Momentum, with an Application to Deep Learning》I Mitliagkas, C Zhang, S Hadjis, C Ré [Stanford University] (2016) [link]
【Deep Learning】The latest from Google DeepMind: improving one-shot learning with memory-augmented neural networks (One-shot Learning with Memory-Augmented Neural Networks). Code: [link] Paper: [link] Dataset: [link]
Paper overview (translated by Synced, 机器之心): Despite recent breakthroughs in the applications of deep neural networks, "one-shot" learning remains a persistent challenge. Traditional gradient-based networks need large amounts of data and extensive iterative training. When new data arrives, a model must inefficiently relearn its parameters to incorporate the new information without catastrophic interference. Architectures with augmented memory capacity, such as Neural Turing Machines (NTMs), offer the ability to encode and retrieve new information quickly, and so can potentially avoid this downside of conventional models. In this paper, we demonstrate that memory-augmented neural networks (MANNs) can rapidly assimilate new data and make accurate predictions from only a few examples. We also introduce a new method for accessing an external memory that focuses on memory content, in contrast to previous models that additionally use location-based addressing mechanisms.
Many important learning problems require the ability to draw valid inferences from small amounts of data and to adjust to new information quickly and soundly. Such problems challenge deep learning, which currently relies on slow, incremental parameter changes. We propose an approach based on the idea of "meta-learning": gradual, incremental learning encodes background knowledge that spans tasks, while a more flexible memory resource binds information particular to each newly encountered task. Our central contribution is to demonstrate the special utility of this class of MANNs for meta-learning, a deep-learning architecture containing a dedicated, addressable memory. Results show that on two meta-learning tasks, classification and regression, MANNs outperform long short-term memory models (LSTMs) given only sparse training data.
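A hedged sketch of the content-based read the abstract describes: a controller-produced key is compared to every memory row by cosine similarity, a softmax turns the similarities into read weights, and the read vector is the weighted sum of rows. The memory contents and shapes below are made up for illustration.

```python
# Content-addressed memory read: cosine similarities -> softmax weights
# -> weighted sum of memory rows. Random memory and key as stand-ins.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(128, 40))              # memory: 128 slots x 40 dims
key = rng.normal(size=40)                   # controller-produced read key

def cosine(a, B):
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-8)

sims = cosine(key, M)
w = np.exp(sims) / np.exp(sims).sum()       # softmax read weights, sum to 1
read_vector = w @ M                         # (40,) content-addressed read
print(read_vector.shape, w.max())
```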
"tensorflow之路-解读textCNN"。最近在研究tensorflow,结合YoonKim的论文和wildml给出的tensorflow实现,自己写了一下用tensorflow实现cnn文本分类的粗浅理解O网页链接。不知道老师会不会帮忙转发呢
'Setting up a Deep Learning Machine from Scratch (Software)' by Sai Soundararaj GitHub: [link] Chinese translation, 《Deep Learning Guide: Setting Up an Environment from Scratch on Ubuntu》: [link]
《A Biased Debrief of the (RE•WORK) Boston Deep Learning Conference》by Dan Kuster [link]
【LSTM】"Creative autocomplete in your browser, write collaboratively with an LSTM" by Ross Goodwin O网页链接