Learning Path Tracking for Real Car-like Mobile Robots From Simulation

Danial Kamran, Junyi Zhu, and Martin Lauer
Institute of Measurement and Control Systems, Karlsruhe Institute of Technology (KIT), Germany

In this paper we propose a Reinforcement Learning (RL) algorithm for path tracking of a real car-like robot. The RL network is trained in simulation and then evaluated on a small racing car without modification. We provide a large amount of training data during offline simulation using a random path generator that covers different curvatures as well as different initial positions, headings, and velocities of the vehicle for the RL agent. In contrast to similar RL-based algorithms, we utilize a Convolutional Neural Network (CNN) as an image embedder to estimate useful information about the current and future position of the vehicle relative to the path. Evaluations of the trained agent on the real car show that it controls the car smoothly and reduces the velocity adaptively to follow a sample track. We also compare the proposed approach with a conventional lateral controller; the results show smoother maneuvers and smaller cross-track errors for the proposed algorithm.
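To illustrate the random path generator mentioned above, the following is a minimal sketch of one plausible implementation: it concatenates constant-curvature segments with randomly sampled curvatures and a random initial heading, so the training data covers a range of curvatures and initial poses. The function name, parameters, and segment-based construction are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def generate_random_path(n_segments=5, seg_len=2.0, step=0.1,
                         max_curvature=0.5, seed=None):
    """Sketch of a random path generator (hypothetical, not the paper's code).

    Builds a 2D path as a list of (x, y) waypoints by chaining segments,
    each with a constant curvature drawn uniformly at random, starting
    from a random initial heading.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    heading = rng.uniform(-math.pi, math.pi)  # random initial heading
    path = [(x, y)]
    for _ in range(n_segments):
        # Constant curvature for this segment, sampled to cover
        # both left and right turns of varying sharpness.
        kappa = rng.uniform(-max_curvature, max_curvature)
        for _ in range(round(seg_len / step)):
            heading += kappa * step          # integrate heading along the arc
            x += step * math.cos(heading)    # advance one step along heading
            y += step * math.sin(heading)
            path.append((x, y))
    return path

waypoints = generate_random_path(seed=42)
```

During training, each episode could then place the vehicle near a freshly generated path with a perturbed pose and velocity, exposing the agent to a wide distribution of tracking situations.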