Emergent Complexity via Multi-Agent Competition

Trapit Bansal∗
UMass Amherst
Jakub Pachocki
OpenAI
Szymon Sidor
OpenAI
Ilya Sutskever
OpenAI
Igor Mordatch
OpenAI
Abstract
Reinforcement learning algorithms can train agents that solve problems in complex,
interesting environments. Normally, the complexity of the trained agent is
closely related to the complexity of the environment. This suggests that a highly
capable agent requires a complex environment for training. In this paper, we point
out that agents trained with self-play in a competitive multi-agent environment can
acquire behaviors that are far more complex than the environment itself. We also point out
that such environments come with a natural curriculum, because for any skill level,
an environment full of agents of this level will have the right level of difficulty.
This work introduces several competitive multi-agent environments where agents
compete in a 3D world with simulated physics. The trained agents learn a
wide variety of complex and interesting skills, even though the environments
themselves are relatively simple. The skills include behaviors such as running,
blocking, ducking, tackling, fooling opponents, kicking, and defending using
both arms and legs. A highlight of the learned behaviors can be found here:
https://goo.gl/eR7fbX.
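As a rough illustration of the self-play curriculum described above, the following minimal Python sketch trains an agent against snapshots of its own past selves, so the opponent pool's difficulty tracks the agent's current skill. The `Policy`, `play_match`, and `update_policy` names are hypothetical placeholders, not the paper's implementation; a real setup would use a simulated-physics environment and an RL algorithm such as PPO.

```python
import copy
import random

class Policy:
    """Placeholder policy; a real agent would be a neural network."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def act(self, observation):
        # A real policy maps observations to actions; here we return a score.
        return self.skill + random.random()

def play_match(agent, opponent):
    """Placeholder competition: higher score (skill plus noise) wins."""
    return agent.act(None) > opponent.act(None)

def update_policy(agent, won):
    """Stand-in for an RL update (e.g., PPO); nudges skill up on wins."""
    agent.skill += 0.01 if won else -0.005

agent = Policy()
past_versions = [copy.deepcopy(agent)]  # pool of earlier selves

for iteration in range(1000):
    # Natural curriculum: opponents are drawn from past versions of the
    # agent itself, so match difficulty stays near the agent's skill level.
    opponent = random.choice(past_versions)
    won = play_match(agent, opponent)
    update_policy(agent, won)
    if iteration % 100 == 0:
        past_versions.append(copy.deepcopy(agent))  # snapshot as future opponent
```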