

Poster

Reinforcement Learning via Auxiliary Task Distillation

Abhinav Narayan Harish · Larry Heck · Josiah P Hanna · Zsolt Kira · Andrew Szot

# 29
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

We present Reinforcement Learning via Auxiliary Task Distillation (AuxDistill), a new method for leveraging reinforcement learning (RL) in long-horizon robotic control problems by distilling behaviors from auxiliary RL tasks. AuxDistill trains pixels-to-actions policies end-to-end with RL, without demonstrations, a learning curriculum, or pre-trained skills. AuxDistill achieves this by concurrently performing multi-task RL on auxiliary tasks that are easier than, and relevant to, the main task. Behaviors learned in the auxiliary tasks are transferred to the main task through a weighted distillation loss. In an embodied object-rearrangement task, we show AuxDistill achieves a 27% higher success rate than baselines.
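The abstract's "weighted distillation loss" can be illustrated with a minimal sketch: a weighted sum of KL divergences from each auxiliary-task policy (acting as a teacher) to the main-task policy. All function and variable names below are illustrative assumptions, not the paper's actual API, and the weighting scheme here is a simple fixed-weight stand-in for whatever relevance weighting AuxDistill uses.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def weighted_distillation_loss(main_logits, aux_logits_list, weights):
    """Hypothetical weighted distillation loss (names are illustrative).

    Sums KL(p_aux_k || p_main) over auxiliary-task policies k,
    each scaled by a relevance weight w_k. Minimizing this pulls the
    main-task policy toward behaviors learned on the auxiliary tasks.
    """
    p_main = softmax(main_logits)  # main-task action distribution
    loss = 0.0
    for aux_logits, w in zip(aux_logits_list, weights):
        p_aux = softmax(aux_logits)  # auxiliary-task (teacher) distribution
        kl = np.sum(p_aux * (np.log(p_aux) - np.log(p_main)))
        loss += w * kl
    return loss
```

In practice this term would be added to the standard RL objective, so the policy is shaped by auxiliary-task behavior while still optimizing the main task's reward.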
