Abstract
With the growth of urban populations, a series of urban problems has emerged, and how to accelerate smart city construction has received extensive attention. Remote sensing images offer wide spatial coverage and rich information, making them well suited as research data for smart cities. However, owing to the limitations of imaging sensors and complex weather conditions, remote sensing images suffer from insufficient resolution and cloud occlusion, and therefore cannot meet the resolution requirements of smart city tasks. Remote sensing image super-resolution (SR) can improve detail and texture information without upgrading the imaging sensor system, making it a feasible solution to these problems. In this paper, we propose a novel remote sensing image SR method that leverages texture features from internal and external references to aid SR reconstruction. We introduce a transformer attention mechanism to select and extract only those texture features with high reference value, which keeps the network lightweight, effective, and easy to deploy on edge computing devices. In addition, our network automatically learns and adjusts the alignment angles and scales of the texture features for better SR results. Extensive comparison experiments show that our method achieves superior performance compared with several state-of-the-art SR methods. We also evaluate the application value of the proposed SR method for urban region function recognition in smart cities, where the available dataset is of low quality. A comparative experiment between the original dataset and the SR dataset generated by our method indicates that our method effectively improves recognition accuracy.
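To make the attention-based texture selection summarized above more concrete, the following is a minimal, self-contained sketch of reference-based texture transfer via hard attention. It is only a conceptual illustration under assumed names and shapes (e.g., transfer_textures, lr_feat, ref_feat), not the authors' actual network or implementation.

```python
# Minimal sketch of reference-based texture transfer with hard attention.
# All names are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def transfer_textures(lr_feat, ref_feat, patch=3):
    """Select, for each query patch of the low-resolution features, the most
    relevant reference patch and transfer it.

    lr_feat:  (B, C, H, W) features of the upsampled low-resolution input
    ref_feat: (B, C, H, W) features of an internal or external reference
    Returns the transferred feature map (B, C, H, W) and a confidence map
    (B, 1, H, W) given by the best similarity score per position.
    """
    B, C, H, W = lr_feat.shape
    pad = patch // 2

    # Unfold both feature maps into overlapping patches: (B, C*p*p, H*W)
    q = F.unfold(lr_feat, kernel_size=patch, padding=pad)
    k = F.unfold(ref_feat, kernel_size=patch, padding=pad)

    # Normalize so the dot product becomes cosine similarity
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)

    # Relevance between every query patch and every reference patch
    sim = torch.bmm(q.transpose(1, 2), k)          # (B, H*W, H*W)
    score, idx = sim.max(dim=2)                    # hard attention: best match

    # Gather the selected reference patches (values = unfolded ref features)
    v = F.unfold(ref_feat, kernel_size=patch, padding=pad)
    idx = idx.unsqueeze(1).expand(-1, v.size(1), -1)
    transferred = v.gather(dim=2, index=idx)

    # Fold the transferred patches back into a map, averaging overlaps
    ones = torch.ones(B, v.size(1), H * W, device=lr_feat.device)
    out = F.fold(transferred, (H, W), kernel_size=patch, padding=pad)
    norm = F.fold(ones, (H, W), kernel_size=patch, padding=pad)
    out = out / norm

    # Confidence map for weighting transferred textures during fusion
    conf = score.view(B, 1, H, W)
    return out, conf


if __name__ == "__main__":
    lr = torch.randn(1, 16, 32, 32)
    ref = torch.randn(1, 16, 32, 32)
    fused, conf = transfer_textures(lr, ref)
    print(fused.shape, conf.shape)  # (1, 16, 32, 32) and (1, 1, 32, 32)
```

In such a design, the hard (arg-max) selection transfers only the single most relevant reference patch per position, which keeps memory and computation modest compared with dense soft attention; the accompanying confidence map can then weight how strongly the transferred textures are fused into the SR reconstruction.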