Smart City Gnosys

Smart city article details

Title A Cybertwin-Driven Intelligent Offloading Method For IoV Applications Using DRL In Smart Cities
ID_Doc 1212
Authors Liu P.; Peng K.; Zhao B.
Year 2022
Published Proceedings of the 2022 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2022
DOI http://dx.doi.org/10.1109/DASC/PiCom/CBDCom/Cy55231.2022.9927948
Abstract The Internet of Vehicles (IoV) is regarded as an enabling platform for improving traffic conditions in cities. It supports a variety of promising applications, such as self-driving and route planning, which generate numerous latency-critical tasks. These tasks place high demands on computing capacity that resource-restricted vehicles cannot satisfy. Mobile edge computing (MEC) is emerging as a potential solution that relieves resource-limited vehicles of latency-critical tasks by enabling them to offload tasks to nearby road-side units. In addition, cybertwin technology can abstract the IoV system into a digital entity, facilitating information acquisition and management. However, leveraging cybertwin technology to optimize the offloading policy remains challenging. To this end, we design a deep reinforcement learning method that trains an optimal decision-making agent within a cybertwin-driven IoV system, aiming to minimize the system cost under latency constraints. The experimental results illustrate that the agent trained by our proposed method outperforms the other methods. © 2022 IEEE.
Author Keywords Computation Offloading; Cybertwin; Deep Reinforcement Learning; IoV; Latency-Critical Task; Smart Cities
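
The abstract describes training a decision-making agent with deep reinforcement learning so that each task is either executed locally on the vehicle or offloaded to a nearby road-side unit, minimizing a system cost under latency constraints. The sketch below illustrates the shape of such a decision loop using a tabular, epsilon-greedy Q-learning stand-in for the DRL agent; the state features, cost weights, latency model, and latency limit are illustrative assumptions, not the paper's implementation.

import random
from collections import defaultdict

# Hypothetical cost weights and environment parameters (assumptions, not from the paper).
LATENCY_WEIGHT = 0.7      # weight of latency in the system cost
ENERGY_WEIGHT = 0.3       # weight of energy in the system cost
LATENCY_LIMIT = 0.5       # seconds; tasks exceeding this incur a constraint penalty
ACTIONS = (0, 1)          # 0 = execute locally on the vehicle, 1 = offload to a road-side unit

def observe_state():
    """Toy cybertwin-style observation: (task size bucket, channel quality bucket)."""
    task_size = random.randint(0, 4)       # discretized task size
    channel = random.randint(0, 4)         # discretized wireless channel quality
    return (task_size, channel)

def task_reward(state, action):
    """Return the negative system cost of executing or offloading one task."""
    task_size, channel = state
    if action == 0:                        # local execution: slower CPU, higher energy use
        latency = 0.15 * (task_size + 1)
        energy = 0.10 * (task_size + 1)
    else:                                  # offloading: transmission delay plus edge execution
        latency = 0.05 * (task_size + 1) + 0.20 / (channel + 1)
        energy = 0.04 * (task_size + 1)
    cost = LATENCY_WEIGHT * latency + ENERGY_WEIGHT * energy
    if latency > LATENCY_LIMIT:            # soft penalty for violating the latency constraint
        cost += 1.0
    return -cost                           # reward = negative system cost

def train(episodes=20000, alpha=0.1, eps=0.1):
    """Train the offloading agent; each task is treated as an independent decision."""
    q = defaultdict(float)                 # Q[(state, action)] -> estimated reward
    for _ in range(episodes):
        state = observe_state()
        if random.random() < eps:          # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = task_reward(state, action)
        # Incremental update toward the observed reward for this state-action pair.
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

if __name__ == "__main__":
    q = train()
    for task_size in range(5):
        for channel in range(5):
            s = (task_size, channel)
            choice = "offload" if q[(s, 1)] > q[(s, 0)] else "local"
            print(f"task_size={task_size} channel={channel} -> {choice}")

In the paper's setting, the cybertwin layer would supply richer state information about vehicles and road-side units, and a deep network would replace the Q-table, but the structure of cost-minimizing decision-making under a latency constraint is analogous.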


Similar Articles


Id | Similarity | Authors | Title | Published
54442 | 0.93 | Zhao X.; Liu M.; Li M. | Task Offloading Strategy And Scheduling Optimization For Internet Of Vehicles Based On Deep Reinforcement Learning | Ad Hoc Networks, 147 (2023)
17056 | 0.928 | Zhang X.; Xing H.; Zang W.; Jin Z.; Shen Y. | Cybertwin-Driven Multi-Intelligent Reflecting Surfaces Aided Vehicular Edge Computing Leveraged By Deep Reinforcement Learning | IEEE Vehicular Technology Conference, 2022-September (2022)
32466 | 0.907 | Wu Y.; Fang X.; Min G.; Chen H.; Luo C. | Intelligent Offloading Balance For Vehicular Edge Computing And Networks | IEEE Transactions on Intelligent Transportation Systems, 26, 5 (2025)
34433 | 0.892 | Yao R.; Liu L.; Zuo X.; Yu L.; Xu J.; Fan Y.; Li W. | Joint Task Offloading And Power Control Optimization For IoT-Enabled Smart Cities: An Energy-Efficient Coordination Via Deep Reinforcement Learning | IEEE Transactions on Consumer Electronics (2025)
18051 | 0.891 | Agbaje P.; Nwafor E.; Olufowobi H. | Deep Reinforcement Learning For Energy-Efficient Task Offloading In Cooperative Vehicular Edge Networks | IEEE International Conference on Industrial Informatics (INDIN), 2023-July (2023)
38090 | 0.891 | Jiao T.; Feng X.; Guo C.; Wang D.; Song J. | Multi-Agent Deep Reinforcement Learning For Efficient Computation Offloading In Mobile Edge Computing | Computers, Materials and Continua, 76, 3 (2023)
54435 | 0.885 | Chabi Sika Boni A.K.; Hassan H.; Drira K. | Task Offloading In Autonomous IoT Systems Using Deep Reinforcement Learning And NS3-Gym | ACM International Conference Proceeding Series (2021)
54441 | 0.881 | Zeng J.; Gou F.; Wu J. | Task Offloading Scheme Combining Deep Reinforcement Learning And Convolutional Neural Networks For Vehicle Trajectory Prediction In Smart Cities | Computer Communications, 208 (2023)
18053 | 0.879 | Kansal V.; Shnain A.H.; Deepak A.; Rana A.; Manjunatha; Dixit K.K.; Rajkumar K.V. | Deep Reinforcement Learning For IoT-Based Smart Traffic Management Systems | Proceedings of International Conference on Contemporary Computing and Informatics, IC3I 2024 (2024)
46071 | 0.877 | Cui X. | Resource Allocation In IoT Edge Computing Networks Based On Reinforcement Learning | Advances in Transdisciplinary Engineering, 70 (2025)