Anonymous authors
Paper under double-blind review
ABSTRACT
We present Value Propagation (VProp), a parameter-efficient differentiable planning
module built on Value Iteration that can be trained with reinforcement learning to
solve unseen tasks, generalizes to larger map sizes, and learns to navigate in
dynamic environments. We evaluate it on MazeBase grid-worlds with randomly generated
environments of several different sizes. Furthermore, we show that the module and its
variants provide a simple way to learn to plan when adversarial agents are present
and the environment is stochastic, yielding a cost-efficient learning system for
building low-level, size-invariant planners for a variety of interactive navigation
problems.
VALUE PROPAGATION NETWORKS
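For context, the classical recurrence that VProp's differentiable module is built on is tabular value iteration, V(s) ← max_a [R(s) + γ V(s')]. Below is a minimal sketch of that recurrence on a deterministic, 4-connected grid-world; the function name, reward convention (one reward per cell), and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def value_iteration(rewards, gamma=0.9, iters=50):
    """Tabular value iteration on a deterministic 4-connected grid.

    rewards: 2-D array of per-cell rewards. Each cell's value is its own
    reward plus the discounted value of its best neighbour. This is the
    classical (non-differentiable) recurrence that VIN-style modules such
    as VProp replace with learned, convolutional operators.
    """
    v = np.zeros_like(rewards, dtype=float)
    for _ in range(iters):
        # Pad with -inf so off-grid "neighbours" never win the max.
        padded = np.pad(v, 1, constant_values=-np.inf)
        neighbours = np.stack([
            padded[:-2, 1:-1],   # value of the cell above
            padded[2:, 1:-1],    # below
            padded[1:-1, :-2],   # left
            padded[1:-1, 2:],    # right
        ])
        # Bellman backup: greedy one-step lookahead over the four moves.
        v = rewards + gamma * neighbours.max(axis=0)
    return v
```

On a grid with a single high-reward goal cell, the converged values decrease monotonically with distance from the goal, so a greedy policy over neighbouring values walks to the goal; a differentiable planner learns the reward and transition operators instead of being handed them.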