Contextual-MDPs for PAC-Reinforcement Learning with Rich Observations

https://128.84.21.199/pdf/1602.02722v1.pdf

We propose and study a new tractable model for reinforcement learning with high-dimensional observations, called Contextual-MDPs, generalizing contextual bandits to a sequential decision-making setting.

These models require an agent to take actions based on high-dimensional observations (features) with the goal of achieving long-term performance competitive with a large set of policies. Since the size of the observation space is a primary obstacle to sample-efficient learning, Contextual-MDPs are assumed to be summarizable by a small number of hidden states. In this setting, we design a new reinforcement learning algorithm that engages in global exploration while using a function class to approximate future performance.
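To make the setup concrete, here is a minimal sketch of the structure the abstract describes: a small hidden state space drives rewards and dynamics, while the agent only ever sees high-dimensional observations emitted from the current hidden state. This is an illustrative toy, not the paper's construction; the class name `ToyContextualMDP`, the Gaussian emission model, and all parameter values are assumptions made for the example.

```python
import numpy as np

class ToyContextualMDP:
    """Illustrative toy: a few hidden states, rich observations.

    The learner never sees the hidden state, only a noisy
    high-dimensional observation emitted from it, so the observation
    space is enormous but "summarizable" by n_states hidden states.
    """

    def __init__(self, n_states=5, n_actions=3, horizon=10, obs_dim=1000, seed=0):
        rng = np.random.default_rng(seed)
        self.rng = rng
        self.n_states = n_states
        self.n_actions = n_actions
        self.horizon = horizon
        # Deterministic hidden-state transitions indexed by (state, action).
        self.transitions = rng.integers(n_states, size=(n_states, n_actions))
        # Rewards depend only on the small hidden state and the action.
        self.rewards = rng.uniform(size=(n_states, n_actions))
        # Each hidden state has an emission mean in a high-dimensional space.
        self.emission_means = rng.normal(size=(n_states, obs_dim))

    def reset(self):
        self.state = 0
        self.t = 0
        return self._observe()

    def _observe(self):
        # A noisy high-dimensional observation of the current hidden state.
        noise = self.rng.normal(size=self.emission_means.shape[1])
        return self.emission_means[self.state] + 0.1 * noise

    def step(self, action):
        reward = self.rewards[self.state, action]
        self.state = self.transitions[self.state, action]
        self.t += 1
        done = self.t >= self.horizon
        return self._observe(), reward, done

def run_episode(env, policy):
    """Roll out one episode; the policy maps raw observations to actions."""
    obs, total = env.reset(), 0.0
    done = False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

env = ToyContextualMDP()
random_policy = lambda obs: np.random.randint(env.n_actions)
print(run_episode(env, random_policy))
```

The point of the toy is that a learner interacting only through `reset` and `step` never observes the hidden state, so any tabular method over raw observations is hopeless, while the small hidden state space keeps the problem statistically tractable.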

We also establish a sample complexity guarantee for this algorithm, proving that it learns near-optimal behavior after a number of episodes that is polynomial in all relevant parameters, logarithmic in the number of policies, and independent of the size of the observation space. This represents an exponential improvement over the sample complexity of all existing alternative approaches and provides theoretical justification for reinforcement learning with function approximation.
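Stated schematically, the guarantee described above has the following shape, writing M for the number of hidden states, K for the number of actions, H for the horizon, N for the size of the policy class, and ε, δ for the target accuracy and failure probability. This is a hedged reading of the abstract's claim, not the theorem itself; the exact polynomial is in the paper.

```latex
% Schematic shape of the claimed sample complexity (exact exponents in the paper).
% M = hidden states, K = actions, H = horizon, N = policy class size,
% \epsilon = target accuracy, \delta = failure probability.
\[
  \#\text{episodes} \;=\; \mathrm{poly}\!\left(M,\, K,\, H,\, \tfrac{1}{\epsilon}\right)
  \cdot \log\frac{N}{\delta}
\]
```

The log N dependence is what lets the policy class be exponentially large, and the absence of any term counting distinct observations is the exponential improvement over observation-tabular approaches.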
