KAUST Department: King Abdullah University of Science and Technology, Saudi Arabia
Permanent link to this record: http://hdl.handle.net/10754/652989
Abstract: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5-scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands.
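The abstract's core idea, conditioning the policy on a navigational command, can be illustrated with a minimal sketch. This is not the authors' code: the layer sizes, command set, and the branched-head design are assumptions for illustration only. A shared feature extractor processes the observation, and the high-level command selects which command-specific head produces the control output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical command vocabulary (assumed for illustration).
COMMANDS = ["follow_lane", "turn_left", "turn_right", "go_straight"]

# Hypothetical dimensions: 8-D image feature in, 2-D control out (steer, throttle).
W_shared = rng.normal(size=(8, 16))                        # shared perception layer
W_heads = {c: rng.normal(size=(16, 2)) for c in COMMANDS}  # one head per command


def policy(observation, command):
    """Map an observation and a high-level command to a control action.

    The command does not enter the network as an input feature; instead it
    selects which branch (head) computes the output, so each branch can
    specialize in one maneuver.
    """
    features = np.tanh(observation @ W_shared)  # shared sensorimotor features
    return features @ W_heads[command]          # command selects the branch


obs = rng.normal(size=8)
left = policy(obs, "turn_left")
right = policy(obs, "turn_right")
# The same observation yields different controls under different commands.
assert left.shape == (2,) and not np.allclose(left, right)
```

The branch-selection design sketched here reflects the paper's observation that a policy must behave differently for the same visual input depending on the intended route; routing the command through architecture rather than as just another input feature makes that dependence explicit.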
Citation: Codevilla F, Müller M, Lopez A, Koltun V, Dosovitskiy A (2018) End-to-End Driving Via Conditional Imitation Learning. 2018 IEEE International Conference on Robotics and Automation (ICRA). Available: http://dx.doi.org/10.1109/ICRA.2018.8460487.
Sponsors: Antonio M. Lopez and Felipe Codevilla acknowledge the Spanish project TIN2017-88709-R (Ministerio de Economia, Industria y Competitividad), the Spanish DGT project SPIP2017-02237, and the Generalitat de Catalunya CERCA Program and its ACCIO agency. Felipe Codevilla was supported in part by FI grant 2017FI-B1-00162. Antonio and Felipe also thank German Ros, who proposed investigating the benefits of introducing route commands into the end-to-end driving paradigm during his time at CVC.
Conference/Event name: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018