Automatic Obstacle Avoidance for Autonomous Agents Based on End-to-End Deep Imitation Learning
Abstract
This research addresses automatic obstacle avoidance during navigation by autonomous agents. Hand-programming a traditional obstacle avoidance system is difficult and expensive; therefore, a neural network-based approach known as end-to-end deep imitation learning is proposed. Because this approach is data-driven, it is comparatively easier and more cost-effective than traditional methods. The research also proposes a convolutional neural network architecture and image processing techniques for effective and efficient training. Testing is conducted on a track with randomly placed obstacles in the Webots simulator. Stepwise performance evaluations demonstrate that the proposed architecture successfully trains autonomous agents to maneuver around dynamic obstacles using a relatively small training dataset.
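The core idea of imitation learning described above is behavioral cloning: collect (observation, expert action) pairs and fit a policy to them by supervised learning. The following is a minimal, self-contained sketch of that idea; the toy "expert", the 1-D observation, and all names are hypothetical stand-ins (the paper itself uses camera images and a CNN), and a linear least-squares fit stands in for gradient-descent training.

```python
import numpy as np

# Illustrative behavioral-cloning sketch (hypothetical toy setup, not the
# paper's actual CNN pipeline). The agent observes an obstacle's lateral
# offset and learns to imitate an expert's steering command.

rng = np.random.default_rng(0)

def expert_steering(obstacle_offset):
    # The expert steers away from the obstacle, proportionally to its offset.
    return -0.8 * obstacle_offset

# 1. Collect demonstrations: (observation, expert action) pairs.
offsets = rng.uniform(-1.0, 1.0, size=200)
observations = np.stack([offsets, np.ones_like(offsets)], axis=1)  # feature + bias
actions = expert_steering(offsets)

# 2. "Train" the policy: supervised regression on the demonstrations.
#    (In the paper this step is CNN training; here it is least squares.)
weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# 3. Deploy: the learned policy reproduces the expert's avoidance maneuver.
test_offset = 0.5
predicted = float(np.array([test_offset, 1.0]) @ weights)
print(round(predicted, 3))  # close to expert_steering(0.5) = -0.4
```

The same data-collection/fit/deploy loop scales up directly: replace the scalar observation with a camera frame, the linear model with a CNN, and least squares with stochastic gradient descent, and the result is the end-to-end training pipeline the abstract describes.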