Smart City Gnosys

Smart city article details

Title Hybrid Reinforcement Learning For Data Stream Freshness In Autonomous Vehicle Networks
ID_Doc 29813
Authors Ebrahimi D.; Ghosh P.; Alzhouri F.; De Oliveira T.E.A.
Year 2024
Published Proceedings - IEEE Global Communications Conference, GLOBECOM
DOI http://dx.doi.org/10.1109/GLOBECOM52923.2024.10901707
Abstract Autonomous vehicles (AVs) are poised to become integral components of intelligent transportation systems, particularly within the framework of future smart cities. Traditional performance metrics such as throughput and latency fall short of capturing the temporal relevance and freshness of data in critical applications such as autonomous driving and accident prevention. Consequently, this paper addresses the challenge of reducing the Age of Information (AoI) when disseminating data streams within AV-assisted vehicular networks. Given the dynamic nature of the environment, the problem is formulated as a Markov decision process and tackled using Q-learning and double deep Q-network (DDQN), both prominent reinforcement learning (RL) algorithms. Additionally, a heuristic approach is introduced to augment the performance of the RL algorithms, expediting convergence of environmental learning. The numerical findings underscore the effectiveness of the proposed methodologies in minimizing the aggregate AoI across all data streams. © 2024 IEEE.
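
The abstract casts AoI minimization as an MDP solved with Q-learning and DDQN. The sketch below is a minimal, illustrative tabular Q-learning loop for a toy AoI scheduling environment; the stream count, age cap, reward shape, and hyperparameters are assumptions chosen for illustration and do not reproduce the paper's actual formulation, heuristic, or DDQN variant.

```python
import random
from collections import defaultdict

# Minimal sketch (assumed setup, not the paper's model): N_STREAMS data
# streams, one stream may be refreshed per slot, ages are capped at AGE_CAP,
# and the reward is the negative aggregate AoI.
N_STREAMS = 3
AGE_CAP = 10           # truncate ages so the tabular state space stays finite
ALPHA, GAMMA = 0.1, 0.95
EPSILON = 0.1
EPISODES, STEPS = 500, 200

def step(ages, action):
    """Advance one slot: the chosen stream is refreshed, the others age by one."""
    next_ages = tuple(
        1 if i == action else min(a + 1, AGE_CAP)
        for i, a in enumerate(ages)
    )
    reward = -sum(next_ages)           # smaller aggregate AoI => larger reward
    return next_ages, reward

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the N_STREAMS scheduling choices."""
    if random.random() < EPSILON:
        return random.randrange(N_STREAMS)
    return max(range(N_STREAMS), key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    state = tuple(1 for _ in range(N_STREAMS))   # all streams start fresh
    for _ in range(STEPS):
        action = choose_action(state)
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in range(N_STREAMS))
        # Standard Q-learning temporal-difference update.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy tends to schedule the stalest stream.
print(choose_action((1, 5, 9)))
```

In this toy setting the learned policy approximates a "refresh the oldest stream" rule; the paper's DDQN variant would replace the table with a neural approximator to handle larger state spaces.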
Author Keywords


Similar Articles


Id | Similarity | Authors | Title | Published
40782 | 0.958 | Ebrahimi D.; Ghosh P.; Alzhouri F.; De Oliveira T.E.A. | Optimizing Data Stream Freshness For Enhanced Communication In Autonomous Vehicle Networks | IEEE Wireless Communications and Networking Conference, WCNC (2025)
28394 | 0.853 | Ren Y.; Xie R.; Yu F.R.; Huang T.; Liu Y. | Green Intelligence Networking For Connected And Autonomous Vehicles In Smart Cities | IEEE Transactions on Green Communications and Networking, 6, 3 (2022)
44881 | 0.852 | Teixeira L.H.; Huszák Á. | Reinforcement Learning Environment For Advanced Vehicular Ad Hoc Networks Communication Systems | Sensors, 22, 13 (2022)
2234 | 0.852 | Chalaki B.; Malikopoulos A.A. | A Hysteretic Q-Learning Coordination Framework For Emerging Mobility Systems In Smart Cities | 2021 European Control Conference, ECC 2021 (2021)
44899 | 0.851 | Bugarčić P.; Jevtić N.; Malnar M. | Reinforcement Learning-Based Routing Protocols In Vehicular And Flying Ad Hoc Networks – A Literature Survey | Promet - Traffic and Transportation, 34, 6 (2022)