Smart City Gnosys

Smart city article details

Title: A Hysteretic Q-Learning Coordination Framework For Emerging Mobility Systems In Smart Cities
ID_Doc: 2234
Authors: Chalaki B.; Malikopoulos A.A.
Year: 2021
Published: 2021 European Control Conference, ECC 2021
DOI: http://dx.doi.org/10.23919/ECC54610.2021.9655172
Abstract: Connected and automated vehicles (CAVs) can alleviate traffic congestion and air pollution and improve safety. In this paper, we provide a decentralized coordination framework for CAVs at a signal-free intersection to minimize travel time and improve fuel efficiency. We employ a simple yet powerful reinforcement learning approach, an off-policy temporal-difference learning method called Q-learning, enhanced with a coordination mechanism, to address this problem. We then integrate a first-in-first-out queuing policy to improve the performance of our system. We demonstrate the efficacy of our proposed approach through simulation and through comparison with the classical optimal control method based on Pontryagin's minimum principle. © 2021 EUCA.
Author Keywords:
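
The abstract describes hysteretic Q-learning, a tabular, off-policy temporal-difference method that uses two learning rates: a larger one for positive TD errors and a smaller one for negative ones, which keeps decentralized agents optimistic about their teammates' exploratory actions. The sketch below is a minimal, generic illustration of that update rule only; the class name, default hyperparameters, and epsilon-greedy exploration are assumptions for illustration, and the paper's CAV state/action encoding, coordination mechanism, and first-in-first-out queuing policy are not reproduced here.

```python
import numpy as np

class HystereticQAgent:
    """Minimal tabular hysteretic Q-learning sketch (illustrative, not the paper's code)."""

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.01,
                 gamma=0.95, epsilon=0.1, seed=0):
        self.q = np.zeros((n_states, n_actions))  # Q-table
        self.alpha = alpha      # learning rate for positive TD errors
        self.beta = beta        # smaller rate for negative TD errors (beta < alpha)
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        # Epsilon-greedy action selection over the current Q-estimates.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state, done):
        # Standard off-policy Q-learning target.
        target = reward if done else reward + self.gamma * np.max(self.q[next_state])
        delta = target - self.q[state, action]
        # Hysteresis: update quickly on good news, slowly on bad news.
        rate = self.alpha if delta >= 0.0 else self.beta
        self.q[state, action] += rate * delta
```

Using a smaller rate for negative TD errors is what makes the method suited to decentralized multi-agent settings such as the CAV coordination problem above: an agent does not heavily penalize an action merely because another agent explored a poor joint action at the same step.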


Similar Articles


Id | Similarity | Authors | Title | Published
4034 | 0.876 | Wang E.; Memar F.H.; Korzelius S.; Sadek A.W.; Qiao C. | A Reinforcement Learning Approach To Cav And Intersection Control For Energy Efficiency | Proceedings - 2022 5th International Conference on Connected and Autonomous Driving, MetroCAD 2022 (2022)
44887 | 0.86 | Joo H.; Lim Y. | Reinforcement Learning For Traffic Signal Timing Optimization | International Conference on Information Networking, 2020-January (2020)
8536 | 0.86 | Chen C.; Zhang Y.R.; Khosravi M.R.; Pei Q.Q.; Wan S.H. | An Intelligent Platooning Algorithm For Sustainable Transportation Systems In Smart Cities | IEEE Sensors Journal, 21, 14 (2021)
38227 | 0.858 | Lam H.C.; Wong R.T.K.; Jasser M.B.; Chua H.N.; Issa B. | Multi-Junction Traffic Light Control System With Reinforcement Learning In Sunway Smart City | 2024 IEEE International Conference on Automatic Control and Intelligent Systems, I2CACIS 2024 - Proceedings (2024)
58670 | 0.857 | Agafonov A.; Myasnikov V. | Traffic Signal Control: A Double Q-Learning Approach | Proceedings of the 16th Conference on Computer Science and Intelligence Systems, FedCSIS 2021 (2021)
28394 | 0.857 | Ren Y.; Xie R.; Yu F.R.; Huang T.; Liu Y. | Green Intelligence Networking For Connected And Autonomous Vehicles In Smart Cities | IEEE Transactions on Green Communications and Networking, 6, 3 (2022)
40692 | 0.855 | Suanpang P.; Jamjuntr P. | Optimization Regenerative Braking In Electric Vehicles Using Q-Learning For Improving Decision-Making In Smart Cities | Decision Making: Applications in Management and Engineering, 8, 1 (2025)
38103 | 0.855 | Sabit H. | Multi-Agent Reinforcement Learning For Smart City Automated Traffic Light Control | Proceedings - 2023 IEEE International Conference on High Performance Computing and Communications, Data Science and Systems, Smart City and Dependability in Sensor, Cloud and Big Data Systems and Application, HPCC/DSS/SmartCity/DependSys 2023 (2023)
44895 | 0.854 | Barta Z.; Kovács S.; Botzheim J. | Reinforcement Learning-Based Cooperative Traffic Control System | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 14811 LNAI (2024)
4597 | 0.853 | Giannini F.; Franzè G.; Pupo F.; Fortino G. | A Set-Theoretic Receding Horizon Control Based On A Q-Learning Approach For Sustainability Purposes | 9th 2023 International Conference on Control, Decision and Information Technologies, CoDIT 2023 (2023)