Smart City Gnosys

Smart city article details

Title Deep Point Reinforcement Learning Approach for Sustainable Communications by UAV and Moving Interaction Station
ID_Doc 18022
Authors Chen L.; Liu K.; Yang P.; Xiong Z.; An P.; Quek T.Q.S.; Zhang Z.
Year 2025
Published IEEE Transactions on Vehicular Technology
DOI http://dx.doi.org/10.1109/TVT.2025.3572350
Abstract Unmanned aerial vehicles (UAVs) have emerged as a critical component in the smart city, which can significantly enhance integrated sensing and communication (ISAC) performance. This paper mainly investigates the UAV-to-Vehicle (U2V) communication scenarios, where vehicles are represented as rigid shapes in the radar point cloud (RPC). The moving interaction station (MIS) is proposed to provide the sensing-assisted and wireless charging service for the UAV. The radio knowledge map (RKM) is introduced to improve the communication and energy efficiency of the UAV-ISAC system. Then, a joint optimization problem is formulated to complete the data collection and upload task by adjusting the UAV trajectory and vehicle access. To address this problem, a deep point reinforcement learning (DPRL) algorithm is proposed, which contains an RPC network, an RKM network, and a decision-making module. Herein, the RPC and RKM networks are designed to merge and map the vehicle RPC and RKM into the action spaces. The decision-making module selects actions from the action spaces to optimize the UAV trajectory and vehicle access. Simulation results show that the proposed DPRL algorithm outperforms the benchmarks, achieving approximately a 10.87% increase in channel capacity and a 24.08% enhancement in residual energy.
Author Keywords ISAC; moving interaction station; radar point cloud; radio knowledge map; reinforcement learning; UAV communications
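
The abstract describes a three-part DPRL pipeline: an RPC network and an RKM network that map the radar point cloud and the radio knowledge map into the action space, and a decision-making module that selects actions. As a loose illustration only, not the authors' algorithm, the sketch below shows that shape with toy stand-ins: every function name, feature choice, and weight here is invented for illustration, and the "networks" are replaced by simple hand-written feature maps.

```python
import numpy as np

def encode_rpc(points):
    """Toy stand-in for the RPC network: summarize a vehicle's radar
    point cloud (N x 3 array) as a fixed-length feature, here the
    centroid plus the per-axis extent of the rigid shape."""
    return np.concatenate([points.mean(axis=0), np.ptp(points, axis=0)])

def encode_rkm(rkm, pos):
    """Toy stand-in for the RKM network: summarize the radio knowledge
    map around the UAV's grid cell `pos` as (local mean, local max)."""
    r, c = pos
    patch = rkm[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return np.array([patch.mean(), patch.max()])

def decide(rpc_feat, rkm_feat, n_actions=5, weights=None):
    """Toy decision-making module: merge the two feature vectors, score
    each discrete action (e.g. hover or move N/E/S/W), pick the argmax.
    `weights` is a placeholder for whatever the learned policy provides."""
    merged = np.concatenate([rpc_feat, rkm_feat])
    if weights is None:
        weights = np.ones((n_actions, merged.size))
    return int(np.argmax(weights @ merged))
```

In the actual DPRL algorithm these stages are trained networks and the decision module optimizes UAV trajectory and vehicle access jointly; the sketch only shows how point-cloud and map inputs can be merged into a single discrete action choice.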


Similar Articles


Id | Similarity | Authors | Title | Published
3552 | 0.902 | Chen L.; Liu K.; Li B.; Yang Q.; Gao Q.; Zhang Z. | A Novel Sustainable AIoT Scheme for AAV-Assisted Communication Enabled by Radar Point Clouds and Moving Interaction Station | IEEE Internet of Things Journal, 12, 9 (2025)
47814 | 0.866 | Fu F.; Jiao Q.; Yu F.R.; Zhang Z.; Du J. | Securing UAV-to-Vehicle Communications: A Curiosity-Driven Deep Q-Learning Network (C-DQN) Approach | 2021 IEEE International Conference on Communications Workshops, ICC Workshops 2021 - Proceedings (2021)
2337 | 0.861 | Xi M.; Dai H.; He J.; Li W.; Wen J.; Xiao S.; Yang J. | A Lightweight Reinforcement-Learning-Based Real-Time Path-Planning Method for Unmanned Aerial Vehicles | IEEE Internet of Things Journal, 11, 12 (2024)
16152 | 0.860 | Kim J.; Park S.; Jung S.; Cordeiro C. | Cooperative Multi-UAV Positioning for Aerial Internet Service Management: A Multi-Agent Deep Reinforcement Learning Approach | IEEE Transactions on Network and Service Management, 21, 4 (2024)
5700 | 0.855 | Oubbati O.S.; Alotaibi J.; Alromithy F.; Atiquzzaman M.; Altimania M.R. | A UAV-UGV Cooperative System: Patrolling and Energy Management for Urban Monitoring | IEEE Transactions on Vehicular Technology (2025)
38480 | 0.851 | Vishnoi V.; Budhiraja I.; Garg D.; Garg S.; Choi B.J.; Hossain M.S. | Multiagent Deep Reinforcement Learning Based Energy Efficient Resource Management Scheme for RIS-Assisted D2D Users in 6G-Aided Smart Cities Environment | Alexandria Engineering Journal, 116 (2025)
59291 | 0.851 | Liu S.; Sun G.; Teng S.; Li J.; Zhang C.; Wang J.; Du H.; Liu Y. | UAV-Enabled Collaborative Secure Data Transmission via Hybrid-Action Multi-Agent Deep Reinforcement Learning | Proceedings - IEEE Global Communications Conference, GLOBECOM (2024)
59311 | 0.850 | Ullah Z.; Al-Turjman F.; Moatasim U.; Mostarda L.; Gagliardi R. | UAVs Joint Optimization Problems and Machine Learning to Improve the 5G and Beyond Communication | Computer Networks, 182 (2020)