Smart City Gnosys

Smart city article details

Title: Dynamic Resource Management And Task Offloading Framework For Fog Computing
ID_Doc: 21373
Authors: Abdelghany H.M.
Year: 2025
Published: Journal of Grid Computing, 23, 2
DOI: http://dx.doi.org/10.1007/s10723-025-09804-7
Abstract: Fog computing has emerged as a pivotal paradigm for enabling low-latency, high-performance applications by positioning computational resources closer to the network edge. However, task offloading in fog environments poses significant challenges because of the dynamic and heterogeneous nature of fog nodes, which are influenced by fluctuating computational loads, channel conditions, and mobility patterns. This paper introduces a dynamic task-offloading framework leveraging deep Q-learning (DQL), a reinforcement learning technique tailored to optimize task allocation and enhance system performance in such complex settings. The proposed framework models the offloading problem as a Markov decision process (MDP), enabling the DQL agent to learn optimal strategies adaptively by incorporating factors such as task demands, node states, and channel quality. Performance evaluations against state-of-the-art scheduling methods reveal that the DQL-based approach consistently outperforms competing techniques, achieving superior efficiency and reliability. Furthermore, the framework demonstrates scalability and robustness in dynamic fog networks, making it highly suitable for diverse real-time applications. This study highlights the potential of DQL as a transformative solution for dynamic task offloading in fog computing, offering efficient resource management and system stability. Its applicability to domains such as intelligent transportation systems, smart cities, and the IoT underscores its practical relevance and future impact. © The Author(s) 2025.
Author Keywords: Deep Q-learning; Dynamic task offloading; Fog computing; Reinforcement learning
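The abstract casts offloading as a Markov decision process solved with deep Q-learning. As a rough illustration of that idea, the sketch below uses tabular Q-learning (a simplified stand-in for the paper's deep Q-network) on a toy, single-state environment: the agent chooses between local execution and offloading to one of several fog nodes, and the reward is the negative of a hypothetical latency derived from made-up node load values. All names, load figures, and cost constants here are assumptions for illustration, not the paper's model.

```python
import random
from collections import defaultdict

# Toy environment (illustrative values, not from the paper):
# action 0 = execute the task locally; action i >= 1 = offload to fog node i.
NODE_LOADS = [0.9, 0.2, 0.6]   # hypothetical per-node load levels
LOCAL_LATENCY = 1.0            # hypothetical local execution latency

def step(action):
    """Reward = negative latency, so lower latency means higher reward."""
    if action == 0:
        return -LOCAL_LATENCY
    # Offloading cost = fixed transmission overhead + load-dependent queueing
    return -(0.1 + NODE_LOADS[action - 1])

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over offloading actions (single-state MDP)."""
    rng = random.Random(seed)
    actions = range(len(NODE_LOADS) + 1)
    q = defaultdict(float)
    for _ in range(episodes):
        # Explore with probability eps, otherwise act greedily on Q
        if rng.random() < eps:
            a = rng.choice(list(actions))
        else:
            a = max(actions, key=lambda x: q[x])
        r = step(a)
        # One-step Bellman update; with one state, the bootstrap term
        # is the max over the same Q row
        q[a] += alpha * (r + gamma * max(q[x] for x in actions) - q[a])
    return q

q = train()
best = max(range(len(NODE_LOADS) + 1), key=lambda a: q[a])
```

With these toy numbers, the least-loaded node (load 0.2, total latency 0.3) dominates both local execution and the other nodes, so the learned greedy policy offloads to it. The paper's framework replaces this single toy state with a rich state (task demands, node states, channel quality) and the table with a neural network.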


Similar Articles


Id | Similarity | Authors | Title | Published
26075 | 0.899 | Nagabushnam G.; Kim K.H. | Faddeer: A Deep Multi-Agent Reinforcement Learning-Based Scheduling Algorithm For Aperiodic Tasks In Heterogeneous Fog Computing Networks | Cluster Computing, 28, 6 (2025)
6353 | 0.895 | Choppara P.; Mangalampalli S.S. | Adaptive Task Scheduling In Fog Computing Using Federated DQN And K-Means Clustering | IEEE Access, 13 (2025)
7415 | 0.892 | Moghaddasi K.; Rajabi S.; Gharehchopogh F.S.; Ghaffari A. | An Advanced Deep Reinforcement Learning Algorithm For Three-Layer D2D-Edge-Cloud Computing Architecture For Efficient Task Offloading In The Internet Of Things | Sustainable Computing: Informatics and Systems, 43 (2024)
23430 | 0.888 | Sellami B.; Hakiri A.; Yahia S.B.; Berthou P. | Energy-Aware Task Scheduling And Offloading Using Deep Reinforcement Learning In SDN-Enabled IoT Network | Computer Networks, 210 (2022)
39777 | 0.886 | Mattia G.P.; Beraldi R. | On Real-Time Scheduling In Fog Computing: A Reinforcement Learning Algorithm With Application To Smart Cities | 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, PerCom Workshops 2022 (2022)
54442 | 0.885 | Zhao X.; Liu M.; Li M. | Task Offloading Strategy And Scheduling Optimization For Internet Of Vehicles Based On Deep Reinforcement Learning | Ad Hoc Networks, 147 (2023)
46071 | 0.877 | Cui X. | Resource Allocation In IoT Edge Computing Networks Based On Reinforcement Learning | Advances in Transdisciplinary Engineering, 70 (2025)
40037 | 0.876 | Proietti Mattia G.; Beraldi R. | Online Decentralized Scheduling In Fog Computing For Smart Cities Based On Reinforcement Learning | IEEE Transactions on Cognitive Communications and Networking, 10, 4 (2024)
40900 | 0.876 | Rahmani A.M.; Haider A.; Khoshvaght P.; Gharehchopogh F.S.; Moghaddasi K.; Rajabi S.; Hosseinzadeh M. | Optimizing Task Offloading With Metaheuristic Algorithms Across Cloud, Fog, And Edge Computing Networks: A Comprehensive Survey And State-Of-The-Art Schemes | Sustainable Computing: Informatics and Systems, 45 (2025)
54443 | 0.876 | Wu B.; Ma L.; Cong J.; Zhao J.; Yang Y. | Task Offloading Strategy Based On Improved Double Deep Q Network In Smart Cities | Wireless Networks, 31, 5 (2025)