Smart City Gnosys

Smart city article details

Title Reward-Based Energy-Aware Routing Protocols In Wireless Sensor Networks
ID_Doc 46607
Authors Salama R.; Alturjman S.; Altrjman C.; Al-Turjman F.
Year 2024
Published Sustainable Civil Infrastructures
DOI http://dx.doi.org/10.1007/978-3-031-78276-3_87
Abstract Smart cities, industrial automation, healthcare, and environmental monitoring are just a few of the industries that have seen revolutionary changes as a result of Wireless Sensor Networks’ (WSNs) explosive growth. WSNs are significantly hampered by the limited energy resources of their sensor nodes, which are usually battery-powered and placed in inaccessible locations, despite their wide-ranging applications. As a result, energy efficiency is now a crucial component of these networks’ operation and design. Though these protocols frequently fail to adjust to the dynamic and unpredictable nature of WSNs, traditional energy-aware routing methods have been created to decrease energy consumption by choosing the best paths for data transfer. By allowing sensor nodes to learn from their surroundings and make wise routing decisions, the machine learning field of reinforcement learning (RL) provides a viable answer to these problems. In contrast to conventional techniques, RL-based protocols provide continuous energy optimization by dynamically modifying their routing strategies in response to real-time feedback. This paper investigates how to include RL methods—specifically, Q-learning and SARSA—into WSN energy-aware routing protocols. The suggested method creates a more distributed and scalable solution by enabling each node to independently learn and modify its routing choices. The effectiveness of RL-based routing protocols is compared to traditional methods through comprehensive simulations. The findings show that by slowing down the rate at which energy is being used up by each node, RL not only increases energy efficiency but also lengthens the network's operational life. In particular, Q-learning demonstrates a notable improvement in sustaining data delivery rates and network connectivity under various circumstances. 
Although a little more computationally demanding, the SARSA algorithm offers greater flexibility in situations where network topology changes often. This study adds to the expanding corpus of research on intelligent networking by offering a thorough examination of RL's potential in WSNs. By highlighting the advantages of RL in terms of adaptability, robustness, and energy conservation, the findings open the door for further research and development in this field. Along with outlining potential directions for future research, such as the application of deep reinforcement learning and cooperative multi-agent systems, the study also addresses the difficulties in implementing reinforcement learning in environments with limited resources. These difficulties include the trade-offs between learning speed and energy consumption. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
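The abstract's core idea (each node learning energy-aware forwarding decisions via Q-learning) can be sketched roughly as below. This is an illustrative minimal model, not the paper's implementation: the class `QRoutingNode`, the function `energy_aware_reward`, and all parameter values are assumptions made for the example.

```python
import random

class QRoutingNode:
    """Sketch of a sensor node that learns its next hop with Q-learning.

    Each node keeps a Q-value per neighbor estimating the long-run reward
    of forwarding packets through that neighbor (illustrative model).
    """

    def __init__(self, node_id, neighbors, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.node_id = node_id
        self.neighbors = list(neighbors)
        self.q = {n: 0.0 for n in self.neighbors}  # Q-value per candidate next hop
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def choose_next_hop(self):
        # Epsilon-greedy selection: occasionally explore a random neighbor,
        # otherwise forward via the neighbor with the highest Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.neighbors)
        return max(self.q, key=self.q.get)

    def update(self, next_hop, reward, next_hop_best_q):
        # Standard Q-learning update:
        #   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        # next_hop_best_q is the chosen neighbor's best Q-value, reported back
        # with the routing feedback (e.g. piggybacked on an acknowledgement).
        td_target = reward + self.gamma * next_hop_best_q
        self.q[next_hop] += self.alpha * (td_target - self.q[next_hop])


def energy_aware_reward(residual_energy, tx_cost, delivered):
    # Hypothetical reward shaping: reward successful delivery, and penalize
    # transmissions that drain low-energy relays more heavily -- this is what
    # slows per-node energy depletion, as the abstract describes.
    return (1.0 if delivered else -1.0) - tx_cost / max(residual_energy, 1e-6)
```

SARSA would differ only in the update step: instead of bootstrapping from the neighbor's *best* Q-value (`max_a' Q(s', a')`), it uses the Q-value of the action the neighbor *actually* takes next, which is why it tracks a changing topology more closely at some extra computational cost.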
Author Keywords energy-aware routing; energy efficiency; Q-learning; reinforcement learning; routing protocols; SARSA; Wireless sensor networks (WSNs)


Similar Articles


Id | Similarity | Authors | Title | Published
22075 | 0.924 | Priyadarshi R.; Teja P.R.; Vishwakarma A.K.; Ranjan R. | Effective Node Deployment In Wireless Sensor Networks Using Reinforcement Learning | 2025 IEEE 14th International Conference on Communication Systems and Network Technologies, CSNT 2025 (2025)
23355 | 0.898 | Poongodi T.; Sharma R.K. | Energy Optimized Route Selection In Wsns For Smart Iot Applications | 2nd IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics, ICDCECE 2023 (2023)
989 | 0.894 | Saradha K.R.; Sakthy S.S. | A Comprehensive Survey On Energy-Efficient Wireless Sensor Network Protocols For Real-Time Applications | 2025 International Conference on Computing and Communication Technologies, ICCCT 2025 (2025)
29744 | 0.887 | Aleem A.; Thumma R. | Hybrid Energy-Efficient Clustering With Reinforcement Learning For Iot-Wsns Using Knapsack And K-Means | IEEE Sensors Journal (2025)
44899 | 0.886 | Bugarčić P.; Jevtić N.; Malnar M. | Reinforcement Learning-Based Routing Protocols In Vehicular And Flying Ad Hoc Networks – A Literature Survey | Promet - Traffic and Transportation, 34, 6 (2022)
18034 | 0.886 | Sachithanandam V.; D. J.; V.S. B.; Manoharan M. | Deep Reinforcement Learning And Enhanced Optimization For Real-Time Energy Management In Wireless Sensor Networks | Sustainable Computing: Informatics and Systems, 45 (2025)
39227 | 0.885 | Pratap Singh S.; Kumar N.; Saleh Alghamdi N.; Dhiman G.; Viriyasitavat W.; Sapsomboon A. | Next-Gen Wsn Enabled Iot For Consumer Electronics In Smart City: Elevating Quality Of Service Through Reinforcement Learning-Enhanced Multi-Objective Strategies | IEEE Transactions on Consumer Electronics, 70, 4 (2024)
23443 | 0.876 | Pramod M.S.; Balodi A.; Pratik A.; Satya Sankalp G.; Varshita B.; Amrit R. | Energy-Efficient Reinforcement Learning In Wireless Sensor Networks Using 5G For Smart Cities | Applications of 5G and Beyond in Smart Cities (2023)
23222 | 0.876 | Amit; Hanji G. | Energy Efficiency Techniques In Wireless Sensor Network For Sensor Nodes | 2024 Global Conference on Communications and Information Technologies, GCCIT 2024 (2024)
15004 | 0.873 | Bellani S.; Yadav M. | Comparative Analysis Of Neural Network-Based Routing Algorithms For Wireless Sensor Networks | IEEE International Conference on Signal Processing and Advance Research in Computing, SPARC 2024 (2024)