LEARNING TO UNDERSTAND GOAL SPECIFICATIONS
BY MODELLING REWARD
Dzmitry Bahdanau∗
MILA
University of Montreal
Montreal, Canada
dimabgv@gmail.com
Felix Hill
DeepMind
felixhill@google.com
Jan Leike
DeepMind
leike@google.com
Edward Hughes
DeepMind
edwardhughes@google.com
Pushmeet Kohli
DeepMind
pushmeet@google.com
Edward Grefenstette
DeepMind
etg@google.com
ABSTRACT
Recent work has shown that deep reinforcement-learning agents can learn to follow
language-like instructions from infrequent environment rewards. However, this
places the onus on environment designers to implement language-conditional reward
functions, which may become intractable as the complexity of the
environment and of the language grows. To overcome this limitation, we present
a framework within which instruction-conditional RL agents are trained using
rewards obtained not from the environment, but from reward models which are
jointly trained from expert examples. As the reward models improve, they learn to
accurately reward agents for completing tasks in environment configurations and
for instructions not present in the expert data. This framework effectively
separates the representation of what instructions require from how they can be
executed. In a simple grid world, it enables an agent to learn a range of commands
requiring interaction with blocks and understanding of spatial relations and underspecified
abstract arrangements. We further show that the method allows our agent to
adapt to changes in the environment without requiring new expert examples.
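As a concrete illustration of the joint training scheme the abstract describes, the following is a minimal sketch, not the authors' implementation: all module names, encoding dimensions, and the decision threshold below are assumptions. A reward model is fit, discriminator-style, to separate expert (instruction, state) pairs from states the agent visits, and the model's judgement is then substituted for environment reward.

```python
# Sketch of reward modelling from expert examples (assumed design, not the
# authors' code). Instruction and state encodings are assumed to be
# fixed-size vectors produced by upstream encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores how well an encoded state satisfies an encoded instruction."""
    def __init__(self, instr_dim: int = 32, state_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(instr_dim + state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, instr: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([instr, state], dim=-1)).squeeze(-1)

def reward_model_step(model, optimizer, expert_batch, agent_batch):
    """One reward-model update: expert (instruction, goal-state) pairs are
    treated as positives, states visited by the agent as negatives."""
    instr_e, state_e = expert_batch
    instr_a, state_a = agent_batch
    loss = (F.binary_cross_entropy_with_logits(
                model(instr_e, state_e), torch.ones(len(state_e)))
            + F.binary_cross_entropy_with_logits(
                model(instr_a, state_a), torch.zeros(len(state_a))))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def modelled_reward(model, instr, state, threshold: float = 0.5):
    """Binary reward for the agent: 1 when the model deems the task done.
    This replaces the hand-designed environment reward during RL training."""
    with torch.no_grad():
        return (torch.sigmoid(model(instr, state)) > threshold).float()
```

In training, reward-model updates of this kind would be interleaved with standard policy-gradient updates that consume the modelled rewards in place of environment reward; the alternation schedule and the choice of RL algorithm are left open in this sketch.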