Zero-Shot Transfer Learning of a Throwing Task via Domain Randomization

Abstract: Deep reinforcement learning (DRL) for continuous robot control has received wide interest over the last decade. Collecting data directly from real robots incurs high sample complexity and can pose safety risks, so simulators are widely used as efficient alternatives. Unfortunately, policies trained in simulation cannot be directly transferred to real-world robots due to the mismatch between simulation and reality, referred to as the ‘reality gap’. To close this gap, domain randomization (DR) is commonly used. DR improves transferability in the zero-shot setting, i.e., training agents in the source domain and testing them on a previously unseen target domain without fine-tuning. In this work, we identify the positive influence of DR on zero-shot transfer in a sim-to-sim setting with an object throwing task.
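The core idea of domain randomization can be illustrated with a minimal sketch: at the start of each training episode, the simulator's physical parameters are resampled from hand-chosen ranges so the policy learns behavior that is robust across dynamics it may encounter in the unseen target domain. The parameter names and ranges below are assumptions for illustration, not the paper's actual configuration.

```python
import random

# Hypothetical randomization ranges (names and values are assumptions,
# chosen only to illustrate the technique).
PARAM_RANGES = {
    "object_mass": (0.05, 0.5),   # kg
    "friction":    (0.5, 1.2),    # surface friction coefficient
    "motor_gain":  (0.8, 1.2),    # actuator gain multiplier
}

def sample_domain(rng=random):
    """Draw one randomized set of simulator parameters."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes=3):
    """Skeleton of a DR training loop: resample the domain every episode."""
    for ep in range(num_episodes):
        params = sample_domain()
        # In a real setup: env.reset(**params), collect a rollout with the
        # current policy, then apply a DRL update (e.g., policy gradient).
        print(f"episode {ep}: {params}")

train()
```

At test time, the policy is deployed in a fixed target domain that was never seen during training; if the randomization ranges cover the target dynamics, zero-shot transfer tends to succeed without any fine-tuning.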

BibTeX

@inproceedings{park2020zero,
  title={Zero-shot transfer learning of a throwing task via domain randomization},
  author={Park, Sungyong and Kim, Jigang and Kim, H Jin},
  booktitle={2020 20th International Conference on Control, Automation and Systems (ICCAS)},
  pages={1026--1030},
  year={2020},
  organization={IEEE}
}