Smart City Gnosys

Smart city article details

Title Joint Task Offloading And Power Control Optimization For IoT-Enabled Smart Cities: An Energy-Efficient Coordination Via Deep Reinforcement Learning
ID_Doc 34433
Authors Yao R.; Liu L.; Zuo X.; Yu L.; Xu J.; Fan Y.; Li W.
Year 2025
Published IEEE Transactions on Consumer Electronics
DOI http://dx.doi.org/10.1109/TCE.2025.3577809
Abstract Mobile Edge Computing (MEC) enhances computational efficiency by reducing data transmission distance, yet optimizing resource allocation and reducing operational cost remain critical challenges as the number of users grows. This paper investigates a multi-user partial computation offloading system under a time-varying channel environment and proposes a novel deep reinforcement learning-based framework to jointly optimize the offloading strategy and power control, aiming to minimize the weighted sum of latency and energy consumption. Because the problem is multi-parameter, highly coupled, and non-convex, a deep neural network is first used to generate offloading ratio vectors, which are then discretized using an improved k-Nearest Neighbor (KNN) algorithm. Based on the quantized offloading actions, the Differential Evolution (DE) algorithm is employed to find the optimal power control. Finally, the optimal action and state vectors are stored in an experience replay pool for subsequent network training until convergence, producing the optimal solution. Numerical results demonstrate that the proposed improved quantization method avoids additional action exploration while accelerating convergence. Furthermore, the proposed algorithm significantly lowers user-device latency and energy consumption, outperforming other schemes and providing more efficient edge computing services.
Author Keywords Deep reinforcement learning; Differential evolution algorithm; Mobile edge computing; Partial offloading; Power control
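
The abstract outlines a pipeline per decision step: a DNN emits relaxed offloading ratios, an improved KNN quantizer maps them to discrete candidates, DE searches the transmit powers for each candidate, and the best action is stored in a replay pool for training. The sketch below illustrates one such step under loud assumptions: the cost model (`cost`), the quantizer (`knn_quantize`), the DE settings, and the random vector standing in for the DNN output are all hypothetical toys, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4            # number of users (assumed)
W_T, W_E = 0.5, 0.5  # latency / energy weights (assumed)

def cost(ratios, power):
    """Toy weighted latency+energy objective: the offloaded share of each
    task is transmitted at `power`, the rest is computed locally."""
    rate = np.log2(1.0 + power)              # toy channel rate
    t_off = ratios / np.maximum(rate, 1e-9)  # offloading delay
    t_loc = (1.0 - ratios) * 2.0             # local compute delay
    energy = power * t_off + 0.5 * (1.0 - ratios)
    latency = np.maximum(t_off, t_loc)
    return np.sum(W_T * latency + W_E * energy)

def knn_quantize(ratios, k=4, levels=None):
    """Map relaxed ratios to k nearby vectors on a discrete grid
    (a crude stand-in for the paper's improved KNN quantizer)."""
    levels = levels if levels is not None else np.linspace(0.0, 1.0, 5)
    base = levels[np.abs(levels[None, :] - ratios[:, None]).argmin(axis=1)]
    cands = [base]
    for _ in range(k - 1):
        c = base.copy()
        c[rng.integers(N)] = rng.choice(levels)  # perturb one user's ratio
        cands.append(c)
    return cands

def de_power(ratios, pop=10, iters=30, f=0.5, cr=0.9, p_max=2.0):
    """Plain DE (rand/1/bin) over per-user transmit powers."""
    P = rng.uniform(0.01, p_max, size=(pop, N))
    fit = np.array([cost(ratios, p) for p in P])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), 0.01, p_max)
            trial = np.where(rng.random(N) < cr, mutant, P[i])
            ft = cost(ratios, trial)
            if ft < fit[i]:
                P[i], fit[i] = trial, ft
    j = fit.argmin()
    return P[j], fit[j]

# One decision step: relaxed ratios (random here, standing in for the DNN
# output), quantize, optimize power per candidate, keep the best.
relaxed = rng.random(N)
power, value, ratios = min(
    (de_power(q) + (q,) for q in knn_quantize(relaxed)),
    key=lambda t: t[1])
replay = [(relaxed, ratios, power, value)]  # stored for later DNN training
```

The structure mirrors the abstract's loop: quantization keeps the DNN's continuous output usable as a discrete action, and DE handles the continuous power sub-problem given each quantized action.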


Similar Articles


Id | Similarity | Authors | Title | Published
38090 | 0.927 | Jiao T.; Feng X.; Guo C.; Wang D.; Song J. | Multi-Agent Deep Reinforcement Learning For Efficient Computation Offloading In Mobile Edge Computing | Computers, Materials and Continua, 76, 3 (2023)
21789 | 0.916 | Tian K.; Chai H.; Liu Y.; Liu B. | Edge Intelligence Empowered Dynamic Offloading And Resource Management Of MEC For Smart City Internet Of Things | Electronics (Switzerland), 11, 6 (2022)
23430 | 0.916 | Sellami B.; Hakiri A.; Yahia S.B.; Berthou P. | Energy-Aware Task Scheduling And Offloading Using Deep Reinforcement Learning In SDN-Enabled IoT Network | Computer Networks, 210 (2022)
54442 | 0.916 | Zhao X.; Liu M.; Li M. | Task Offloading Strategy And Scheduling Optimization For Internet Of Vehicles Based On Deep Reinforcement Learning | Ad Hoc Networks, 147 (2023)
7415 | 0.911 | Moghaddasi K.; Rajabi S.; Gharehchopogh F.S.; Ghaffari A. | An Advanced Deep Reinforcement Learning Algorithm For Three-Layer D2D-Edge-Cloud Computing Architecture For Efficient Task Offloading In The Internet Of Things | Sustainable Computing: Informatics and Systems, 43 (2024)
18069 | 0.910 | Li W.; Chen X.; Jiao L.; Wang Y. | Deep Reinforcement Learning-Based Intelligent Task Offloading And Dynamic Resource Allocation In 6G Smart City | Proceedings - IEEE Symposium on Computers and Communications, 2023-July (2023)
40621 | 0.906 | Hassan M.T.; Hosain M.K. | Optimization Of Computation Offloading In Mobile-Edge Computing Networks With Deep Reinforcement Approach | 2024 IEEE International Conference on Communication, Computing and Signal Processing, IICCCS 2024 (2024)
18051 | 0.906 | Agbaje P.; Nwafor E.; Olufowobi H. | Deep Reinforcement Learning For Energy-Efficient Task Offloading In Cooperative Vehicular Edge Networks | IEEE International Conference on Industrial Informatics (INDIN), 2023-July (2023)
26323 | 0.902 | Chen X.; Liu G. | Federated Deep Reinforcement Learning-Based Task Offloading And Resource Allocation For Smart Cities In A Mobile Edge Network | Sensors, 22, 13 (2022)
46071 | 0.901 | Cui X. | Resource Allocation In IoT Edge Computing Networks Based On Reinforcement Learning | Advances in Transdisciplinary Engineering, 70 (2025)