Smart City Gnosys

Smart city article details

Title Microservice Instances Selection And Load Balancing In Fog Computing Using Deep Reinforcement Learning Approach
ID_Doc 37003
Authors Boudieb W.; Malki A.; Malki M.; Badawy A.; Barhamgi M.
Year 2024
Published Future Generation Computer Systems, 156
DOI http://dx.doi.org/10.1016/j.future.2024.03.010
Abstract Fog-native computing is an emerging paradigm that makes it possible to build flexible and scalable Internet of Things (IoT) applications using a microservice architecture at the network edge. With this paradigm, IoT applications are decomposed into multiple fine-grained microservices, strategically deployed on various fog nodes to support a wide range of IoT scenarios, such as smart cities and smart farming. Nonetheless, the performance of these IoT applications is affected by their limited effectiveness in processing offloaded IoT requests originating from multiple IoT devices. Specifically, the requested IoT services are composed of multiple dependent microservice instances, collectively referred to as a service plan (SP). Each SP comprises a series of tasks designed to be executed in a predefined order, with the objective of meeting heterogeneous Quality of Service (QoS) requirements (e.g., low service delays). Unlike in the cloud, selecting the appropriate service plan for each IoT request is challenging in dynamic fog environments due to the dependency and decentralization of microservice instances, along with the instability of network conditions and service requests (i.e., they change quickly over time). To deal with this challenge, we study the microservice instances selection problem for IoT applications deployed on fog platforms and propose a learning-based approach that employs Deep Reinforcement Learning (DRL) to compute optimal service plans. These plans minimize the delay of application requests while effectively balancing the load among microservice instances. In our selection process, we carefully address plan dependency to efficiently select valid service plans for every request by introducing two distinct approaches: an action masking approach and an adaptive action mapping approach. Additionally, we propose an improved experience replay to address delayed action effects and enhance our model's training efficiency. A series of experiments were conducted to assess the performance of our Microservice Instances Selection Policy (MISP) approach. The results demonstrate that our model reduces the average failure rate by up to 65% and improves load balance by up to 45% on average compared to the baseline algorithms. © 2024 Elsevier B.V.
Author Keywords Deadline-aware; Deep reinforcement learning; Fog computing; Internet of Things; Load balancing; Microservice selection
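Illustrative note: the abstract describes an action masking step that restricts the DRL agent to microservice instances that are valid for the next task of a service plan. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' implementation; the Q-values, mask, and function name are assumptions made for illustration only.

import numpy as np

def masked_greedy_action(q_values, valid_mask):
    # Hypothetical helper: pick the best action among valid instances only.
    # q_values   : predicted Q-value per candidate microservice instance
    # valid_mask : True where the instance satisfies the plan's dependency
    #              and capacity constraints (assumed for illustration)
    masked_q = np.where(valid_mask, q_values, -np.inf)  # invalid actions get -inf
    return int(np.argmax(masked_q))

# Example: five candidate instances, only instances 1 and 3 are valid.
q = np.array([0.9, 0.4, 0.7, 0.6, 0.8])
mask = np.array([False, True, False, True, False])
print(masked_greedy_action(q, mask))  # -> 3, the best valid instance

In a full DRL loop, this selection would replace a plain argmax over all actions, so that invalid service plans are never chosen during training or inference.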


Similar Articles


Id | Similarity | Authors | Title | Published
18071 | 0.893 | Bushehrian O.; Moazeni A. | Deep Reinforcement Learning-Based Optimal Deployment Of IoT Machine Learning Jobs In Fog Computing Architecture | Computing, 107, 1 (2025)
46071 | 0.885 | Cui X. | Resource Allocation In IoT Edge Computing Networks Based On Reinforcement Learning | Advances in Transdisciplinary Engineering, 70 (2025)
60278 | 0.881 | Bansal M.; Chana I.; Clarke S. | Urbanenqosplace: A Deep Reinforcement Learning Model For Service Placement Of Real-Time Smart City IoT Applications | IEEE Transactions on Services Computing, 16, 4 (2023)
38436 | 0.869 | Matsuoka H.; Moustafa A. | Multi-Task Deep Reinforcement Learning For IoT Service Selection | International Conference on Agents and Artificial Intelligence, 3 (2022)
23430 | 0.868 | Sellami B.; Hakiri A.; Yahia S.B.; Berthou P. | Energy-Aware Task Scheduling And Offloading Using Deep Reinforcement Learning In SDN-Enabled IoT Network | Computer Networks, 210 (2022)
26786 | 0.868 | Al-Hashimi M.A.A.; Rahiman A.R.; Muhammed A.; Hamid N.A.W. | Fog-Cloud Scheduling Simulator For Reinforcement Learning Algorithms | International Journal of Information Technology (Singapore), 17, 5 (2025)
26370 | 0.866 | Han Y.; Li D.; Qi H.; Ren J.; Wang X. | Federated Learning-Based Computation Offloading Optimization In Edge Computing-Supported Internet Of Things | ACM International Conference Proceeding Series (2019)
7415 | 0.866 | Moghaddasi K.; Rajabi S.; Gharehchopogh F.S.; Ghaffari A. | An Advanced Deep Reinforcement Learning Algorithm For Three-Layer D2D-Edge-Cloud Computing Architecture For Efficient Task Offloading In The Internet Of Things | Sustainable Computing: Informatics and Systems, 43 (2024)
47495 | 0.863 | Zhu J.; Chen H.; Wang H. | SDT-MCS: Topology-Aware Microservice Orchestration With Adaptive Learning In Cloud-Edge Environments | Concurrency and Computation: Practice and Experience, 37, 18-20 (2025)
21373 | 0.863 | Abdelghany H.M. | Dynamic Resource Management And Task Offloading Framework For Fog Computing | Journal of Grid Computing, 23, 2 (2025)