Smart City Gnosys

Smart city article details

Title A Novel Hybrid Framework For Motion Planning In Autonomous Vehicles Using Reinforcement And Imitation Learning
ID_Doc 3396
Authors Kaur P.; Sobti R.
Year 2024
Published CINS 2024 - 2nd International Conference on Computational Intelligence and Network Systems
DOI http://dx.doi.org/10.1109/CINS63881.2024.10864402
Abstract Autonomous cars are expected to revolutionize the transportation sector, significantly increasing safety and accessibility, particularly within the smart city context. This paper proposes a novel motion planning approach for self-driving cars that integrates Deep Reinforcement Learning (DRL) and Deep Imitation Learning (DIL). The proposed Hybrid DRL-DIL algorithm aims to improve the decision-making capabilities of self-driving vehicles by combining reinforcement learning and imitation learning strategies. The model was trained in an urban traffic environment using the Robotics System Toolbox in MATLAB together with the Unity simulation engine to create a realistic simulated urban driving scenario. The Deep Learning Toolbox in MATLAB was used to implement and train both the DRL and DIL models. Compared with existing rule-based models, the implemented model reduced collision frequency by 35% and improved lane-keeping precision by 20%. ©2024 IEEE.
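The hybrid DRL-DIL idea described in the abstract, combining an imitation-learning loss on expert actions with a reinforcement-learning objective, can be illustrated with a minimal sketch. The paper's actual implementation uses MATLAB's Deep Learning Toolbox; the toy lane-centering task, the linear policy, and the loss weight `lam` below are illustrative assumptions, not the authors' method.

```python
import random

# Hypothetical 1-D lane-centering task: state s in [-1, 1] is the
# lateral offset from lane center; the expert steers back toward 0.
def expert_action(s):
    return -s

def policy(w, s):
    # Linear policy: steering action = w * s (one learnable gain w).
    return w * s

def reward(s, a):
    # Reward is higher the closer the next offset (s + a) is to 0.
    return -abs(s + a)

def hybrid_update(w, lam=0.5, lr=0.1, n=200, seed=0):
    """Blend an imitation-learning gradient and an RL gradient,
    weighted by lam (imitation) vs. 1 - lam (reinforcement)."""
    rng = random.Random(seed)
    eps = 1e-3
    for _ in range(n):
        s = rng.uniform(-1.0, 1.0)
        a = policy(w, s)
        # Imitation gradient: d/dw of 0.5 * (a - expert)^2 = (a - expert) * s
        g_il = (a - expert_action(s)) * s
        # RL gradient: finite-difference estimate of -d(reward)/dw
        g_rl = -(reward(s, policy(w + eps, s))
                 - reward(s, policy(w - eps, s))) / (2 * eps)
        # Descend on the weighted combination of both losses.
        w -= lr * (lam * g_il + (1 - lam) * g_rl)
    return w

# The learned gain converges close to the expert's gain of -1.
w_final = hybrid_update(0.0)
```

Both terms pull the policy toward the expert here by construction; in the paper's setting the appeal of the hybrid is that imitation data anchors early training while the RL term keeps optimizing beyond the demonstrations.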
Author Keywords Autonomous driving; deep learning; imitation learning; machine learning; motion planning; reinforcement learning


Similar Articles


Id     Similarity  Authors. Title. Published
18049  0.896       Ashwin S.H.; Naveen Raj R. "Deep Reinforcement Learning For Autonomous Vehicles: Lane Keep And Overtaking Scenarios With Collision Avoidance." International Journal of Information Technology (Singapore), 15, 7 (2023)
18032  0.882       Singh D. "Deep Reinforcement Learning (DRL) For Real-Time Traffic Management In Smart Cities." 2023 International Conference on Communication, Security and Artificial Intelligence, ICCSAI 2023 (2023)
18064  0.877       Youssef F.; Houda B. "Deep Reinforcement Learning With External Control: Self-Driving Car Application." ACM International Conference Proceeding Series (2019)
18057  0.871       Mittal M.; Sehgal A.; Varshney N.; Kumar S.P.; Boob N.S.; Reddy R.A. "Deep Reinforcement Learning For Optimizing Route Planning In Urban Traffic." IEEE International Conference on "Computational, Communication and Information Technology", ICCCIT 2025 (2025)
2571   0.868       Crincoli G.; Fierro F.; Iadarola G.; La Rocca P.E.; Martinelli F.; Mercaldo F.; Santone A. "A Method For Road Accident Prevention In Smart Cities Based On Deep Reinforcement Learning." Proceedings of the International Conference on Security and Cryptography, 1 (2022)
38098  0.865       Fereidooni Z.; Palesi L.A.I.; Nesi P. "Multi-Agent Optimizing Traffic Light Signals Using Deep Reinforcement Learning." IEEE Access, 13 (2025)
18053  0.864       Kansal V.; Shnain A.H.; Deepak A.; Rana A.; Manjunatha; Dixit K.K.; Rajkumar K.V. "Deep Reinforcement Learning For IoT-Based Smart Traffic Management Systems." Proceedings of International Conference on Contemporary Computing and Informatics, IC3I 2024 (2024)
26704  0.861       Wu C.; Kreidieh A.R.; Parvate K.; Vinitsky E.; Bayen A.M. "Flow: A Modular Learning Framework For Mixed Autonomy Traffic." IEEE Transactions on Robotics, 38, 2 (2022)
11516  0.861       Giannini F.; Franze G.; Pupo F.; Fortino G. "Autonomous Vehicles In Smart Cities: A Deep Reinforcement Learning Solution." Proceedings of the 2022 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2022 (2022)
48908  0.859       Jang K.; Vinitsky E.; Chalaki B.; Remer B.; Beaver L.; Malikopoulos A.A.; Bayen A. "Simulation To Scaled City: Zero-Shot Policy Transfer For Traffic Control Via Autonomous Vehicles." ICCPS 2019 - Proceedings of the 2019 ACM/IEEE International Conference on Cyber-Physical Systems (2019)