Smart City Gnosys

Smart city article details

Title Joint Optimization Of Spectrum And Energy Efficiency Considering The C-V2X Security: A Deep Reinforcement Learning Approach
ID_Doc 34403
Authors Liu Z.; Han Y.; Fan J.; Zhang L.; Lin Y.
Year 2020
Published IEEE International Conference on Industrial Informatics (INDIN), 2020-July
DOI http://dx.doi.org/10.1109/INDIN45582.2020.9442103
Abstract Cellular vehicle-to-everything (C-V2X) communication, as part of 5G wireless communications, is considered one of the most significant techniques for the Smart City. Vehicle platooning is a Smart City application that improves traffic capacity and safety through C-V2X. However, unlike platoons travelling on highways, C-V2X links can be eavesdropped more easily and spectrum resources can become scarce when vehicles converge at an intersection. Increasing the spectrum efficiency (SE) and energy efficiency (EE) of the platooning network while satisfying the secrecy rate of C-V2X is therefore a major challenge. To solve this problem, this paper proposes SEED, a security-aware approach to enhancing SE and EE based on deep reinforcement learning. SEED formulates an objective function that jointly optimizes SE and EE, treating the secrecy rate of C-V2X as a critical constraint. The optimization problem is transformed into the spectrum and transmission power selections of V2X links using a deep Q network (DQN), which obtains a heuristic SE/EE result through its reward mechanism. Finally, the traffic and communication environments are simulated in Python 3. The evaluation results demonstrate that SEED outperforms the DQN-wopa algorithm and the baseline algorithm in efficiency by 31.83% and 68.40%, respectively. © 2020 IEEE.
Author Keywords 5G; C-V2X; deep reinforcement learning; energy efficiency; Smart City; spectrum efficiency
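The abstract describes casting the joint SE/EE optimization as a reinforcement-learning problem in which the agent picks a (sub-channel, transmit power) pair for each V2X link and is rewarded for the weighted SE/EE objective, with the secrecy rate as a hard constraint. The sketch below illustrates that idea with a single-state tabular Q-learning stand-in for the paper's DQN; all channel, power, noise, and weight values are illustrative assumptions, not figures from the paper.

```python
import math
import random

# Toy action space: pick a spectrum sub-channel and a transmit power level.
CHANNELS = 3                          # assumed number of sub-channels
POWER_LEVELS = [0.1, 0.5, 1.0]        # assumed transmit powers (W)
ACTIONS = [(c, p) for c in range(CHANNELS) for p in POWER_LEVELS]
SECRECY_MIN = 0.5                     # assumed minimum secrecy rate (bit/s/Hz)

def step(action):
    """Toy reward: weighted SE + EE, with the secrecy rate as a hard constraint."""
    ch, p = action
    noise = 0.1 + 0.05 * ch                         # assumed per-channel interference
    se = math.log2(1 + p / noise)                   # spectrum efficiency of the V2X link
    ee = se / p                                     # energy efficiency = rate per watt
    eav_rate = math.log2(1 + p / 0.3)               # assumed eavesdropper channel
    secrecy = max(0.0, se - eav_rate)
    if secrecy < SECRECY_MIN:
        return -1.0                                 # constraint violated -> penalty
    return 0.5 * se + 0.5 * ee                      # joint SE/EE objective

# Epsilon-greedy Q-learning over one state (a bandit stand-in for the DQN).
random.seed(0)
Q = [0.0] * len(ACTIONS)
alpha, eps = 0.1, 0.2
for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))          # explore
    else:
        a = max(range(len(ACTIONS)), key=Q.__getitem__)  # exploit
    Q[a] += alpha * (step(ACTIONS[a]) - Q[a])       # incremental value update

best = ACTIONS[max(range(len(ACTIONS)), key=Q.__getitem__)]
print("best (channel, power):", best)
```

In the paper the DQN replaces this table with a neural network so the agent can generalize over continuous channel states, but the reward shaping (SE/EE payoff gated by the secrecy-rate constraint) follows the same pattern.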


Similar Articles


Id | Similarity | Authors | Title | Published
4085 | 0.917 | Verma R.; Singh S.K. | A Residual Network Based Combined Spectrum And Energy Efficiency Optimization | Proceedings of International Conference on Computing, Communication, Security and Intelligent Systems, IC3SIS 2022 (2022)
4083 | 0.885 | Hong S.; Kim J.; Kim G.; Cho S. | A Research Trends Of Reinforcement Learning Algorithms For C-V2X Network Resource Allocation | International Conference on Ubiquitous and Future Networks, ICUFN (2024)
1390 | 0.883 | Wei H.; Peng Y.; Yue M.; Long J.; AL-Hazemi F.; Mirza M.M. | A Deep Reinforcement Learning Scheme For Spectrum Sensing And Resource Allocation In Its | Mathematics, 11, 16 (2023)
16160 | 0.864 | Sharma S.; Singh B. | Cooperative Reinforcement Learning Based Adaptive Resource Allocation In V2V Communication | 2019 6th International Conference on Signal Processing and Integrated Networks, SPIN 2019 (2019)
43045 | 0.863 | Shakir A.T.; Masini B.M.; Khudhair N.R.; Nordin R.; Amphawan A. | Priority-Aware Multi-Agent Deep Reinforcement Learning For Resource Scheduling In C-V2X Mode 4 Communication | IEEE Access (2025)
11738 | 0.855 | Ye J.; Ge X. | Beam Management Optimization For V2V Communications Based On Deep Reinforcement Learning | Scientific Reports, 13, 1 (2023)
24038 | 0.853 | Vieira M.A.; Galvão G.; Vieira M.; Véstias M.; Louro P.; Jardim-Goncalves R. | Enhancing Traffic Flow With Visible Light Communication: A Deep Reinforcement Learning Approach | Proceedings of SPIE - The International Society for Optical Engineering, 13374 (2025)
10106 | 0.851 | Raj R.; Kumar A.; Mandloi A.; Pal R. | Applications Of Machine Learning And 5G New Radio Vehicle-To-Everything Communication In Smart Cities | Signals and Communication Technology, Part F1293 (2024)
16142 | 0.851 | Wang J.; Topilin I.; Feofilova A.; Shao M.; Wang Y. | Cooperative Intelligent Transport Systems: The Impact Of C-V2X Communication Technologies On Road Safety And Traffic Efficiency | Sensors, 25, 7 (2025)