Smart City Gnosys

Smart city article details

Title A Continuous Actor-Critic Deep Q-Learning-Enabled Deployment of UAV Base Stations: Toward 6G Small Cells in the Skies of Smart Cities
ID_Doc 1087
Authors Parvaresh N.; Kantarci B.
Year 2023
Published IEEE Open Journal of the Communications Society, 4
DOI http://dx.doi.org/10.1109/OJCOMS.2023.3251297
Abstract Uncrewed aerial vehicle-mounted base stations (UAV-BSs), also known as drone base stations, are considered to have promising potential to tackle the limitations of ground base stations. They can provide cost-effective Internet connectivity to users beyond the reach of ground infrastructure, and they can quickly take over as service providers when ground base stations fail unexpectedly. UAV-BSs benefit from their mobile nature, which enables them to change their 3D locations when the demand profile changes rapidly. To effectively leverage this mobility and maximize network performance, the 3D locations of UAV-BSs require continuous optimization. However, the UAV-BS placement optimization problem is NP-hard, with no deterministic solution in polynomial time. In this paper, we propose a continuous actor-critic deep reinforcement learning solution to the location optimization problem of UAV-BSs in the presence of mobile endpoints. The simulation results show that the proposed model significantly improves network performance compared to Q-learning, deep Q-learning, and conventional algorithms. While the Q-learning and deep Q-learning baselines reach sum data rates of 35 Mbps and 42 Mbps, respectively, the proposed ACDQL-based strategy raises the sum data rate of endpoints to 45 Mbps. Furthermore, the proposed ACDQL-based methodology reduces the convergence time of the UAV-BS placement optimization by 85 percent compared to the Q-learning and deep Q-learning baselines. © 2020 IEEE.
Author Keywords 5G; 6G; actor-critic deep reinforcement learning; Aerial base stations; artificial intelligence; deep Q-learning; deep reinforcement learning; Q-learning; reinforcement learning
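The abstract outlines the approach at a high level: an actor-critic agent learns a continuous 3D placement policy for the UAV-BS, rewarded by the sum data rate of the endpoints. Below is a minimal, hypothetical sketch of that idea in the DDPG style (deterministic actor, Q-value critic); the toy environment, free-space path-loss reward, network sizes, endpoint count, exploration noise, and altitude limits are illustrative assumptions, not the paper's actual model or parameters.

```python
# Minimal actor-critic sketch for continuous 3D UAV-BS placement.
# Toy setup (assumption): fixed endpoints, free-space path-loss reward.
import numpy as np
import torch
import torch.nn as nn

N_EP = 5                          # number of ground endpoints (assumption)
state_dim = 3 + 2 * N_EP          # UAV (x, y, z) + endpoint (x, y) positions
action_dim = 3                    # continuous 3D displacement

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

endpoints = np.random.uniform(0, 100, (N_EP, 2))

def sum_rate(uav_pos):
    """Toy sum data rate: Shannon capacity under free-space path loss."""
    d2 = ((endpoints - uav_pos[:2]) ** 2).sum(axis=1) + uav_pos[2] ** 2
    snr = 1e4 / d2                        # arbitrary transmit-power constant
    return np.log2(1.0 + snr).sum()

uav = np.array([50.0, 50.0, 30.0])
gamma = 0.99
for step in range(500):
    s = torch.tensor(np.concatenate([uav, endpoints.ravel()]),
                     dtype=torch.float32)
    a = actor(s) + 0.1 * torch.randn(action_dim)      # exploration noise
    uav_next = uav + a.detach().numpy()               # roughly +/-1 m per axis
    uav_next[2] = np.clip(uav_next[2], 10.0, 100.0)   # altitude limits (assumption)
    r = float(sum_rate(uav_next))
    s2 = torch.tensor(np.concatenate([uav_next, endpoints.ravel()]),
                      dtype=torch.float32)

    # Critic update: one-step TD target using the actor's next action.
    with torch.no_grad():
        target = r + gamma * critic(torch.cat([s2, actor(s2)]))
    q = critic(torch.cat([s, a.detach()]))
    loss_c = nn.functional.mse_loss(q, target)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Actor update: ascend the critic's Q-value (deterministic policy gradient).
    loss_a = -critic(torch.cat([s, actor(s)])).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    uav = uav_next

print(f"final UAV position: {uav}, sum rate: {sum_rate(uav):.2f}")
```

The sketch captures the core loop the abstract alludes to: the critic is fit to a one-step temporal-difference target, and the actor is updated by ascending the critic's Q-value. This is what makes continuous 3D actions tractable, in contrast to the discretized action grids of the Q-learning and deep Q-learning baselines the paper compares against.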


Similar Articles


Id | Similarity | Authors | Title | Published
16152 | 0.87 | Kim J.; Park S.; Jung S.; Cordeiro C. | Cooperative Multi-UAV Positioning for Aerial Internet Service Management: A Multi-Agent Deep Reinforcement Learning Approach | IEEE Transactions on Network and Service Management, 21, 4 (2024)
59281 | 0.86 | Zheng Q.; Shen Z.; Jin J.; Lei Z.; Cheung T.; Xiang W. | UAV-Assisted Intelligent IoT Service Provisioning in Infrastructure-Less Environments | Proceedings - 2024 IEEE Annual Congress on Artificial Intelligence of Things, AIoT 2024 (2024)
59297 | 0.857 | Deng C.; Fang X.; Wang X. | UAV-Enabled Mobile-Edge Computing for AI Applications: Joint Model Decision, Resource Allocation, and Trajectory Optimization | IEEE Internet of Things Journal, 10, 7 (2023)
38480 | 0.856 | Vishnoi V.; Budhiraja I.; Garg D.; Garg S.; Choi B.J.; Hossain M.S. | Multiagent Deep Reinforcement Learning Based Energy Efficient Resource Management Scheme for RIS Assisted D2D Users in 6G-Aided Smart Cities Environment | Alexandria Engineering Journal, 116 (2025)
47814 | 0.854 | Fu F.; Jiao Q.; Yu F.R.; Zhang Z.; Du J. | Securing UAV-to-Vehicle Communications: A Curiosity-Driven Deep Q-Learning Network (C-DQN) Approach | 2021 IEEE International Conference on Communications Workshops, ICC Workshops 2021 - Proceedings (2021)
58717 | 0.854 | Tao X.; Hafid A.S. | Trajectory Design in UAV-Aided Mobile Crowdsensing: A Deep Reinforcement Learning Approach | IEEE International Conference on Communications (2021)
2337 | 0.853 | Xi M.; Dai H.; He J.; Li W.; Wen J.; Xiao S.; Yang J. | A Lightweight Reinforcement-Learning-Based Real-Time Path-Planning Method for Unmanned Aerial Vehicles | IEEE Internet of Things Journal, 11, 12 (2024)