Smart City Gnosys

Smart city article details

Title Semi-Supervised Learning Based 3D Mesh Building Facade Extraction And Semantic Segmentation Method; [基于半监督学习的三维 Mesh 建筑物立面提取与语义分割方法]
ID_Doc 48319
Authors Cheng H.; Zi W.; Peng S.; Chen H.
Year 2023
Published Journal of Zhengzhou University - Natural Science, 55, 4
DOI http://dx.doi.org/10.13705/j.issn.1671-6841.2022161
Abstract Automatic extraction and semantic segmentation of building facades from 3D Mesh data are widely used in smart city modeling and analysis, digital twin city applications, and urban planning and construction. A semantic segmentation model of building facades for 3D Mesh data based on deep learning methods was proposed. Currently, most deep learning approaches based on Convolutional Neural Networks (CNN) are fully supervised, and their performance depends heavily on the quality of the manually labeled training data set. However, high-quality manually annotated datasets of 3D Mesh scenes are expensive and scarce. In view of this, an automatic extraction approach for 3D Mesh building facades was designed, and a semantic segmentation method for 3D building facades based on Mean Teacher semi-supervised learning was proposed. To further enhance performance, the feature and spatial relationship regularization method (FESTA) was introduced, which combines neighborhood structure in both the spatial and the feature domain and uses unlabeled data to improve the model's classification accuracy. Finally, the effectiveness and usability of the proposed method were experimentally verified. © 2023 Editorial Department of Journal of Zhengzhou University. All Rights Reserved.
Author Keywords 3D Mesh building facade; deep learning; semantic segmentation; semi-supervised learning
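The abstract's core idea, Mean Teacher semi-supervised learning, keeps a "teacher" model whose weights are an exponential moving average (EMA) of the "student" model's weights, and penalizes disagreement between the two on unlabeled data. A minimal NumPy sketch of these two ingredients (function names, the `alpha` decay value, and the toy weights are illustrative assumptions, not from the paper):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Mean Teacher update: teacher weights track an exponential
    # moving average of the student weights (alpha is the EMA decay).
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

def consistency_loss(student_logits, teacher_logits):
    # Mean squared error between student and teacher class probabilities,
    # computable on unlabeled samples since no ground truth is needed.
    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    return float(np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2))

# Toy example: the teacher drifts slowly toward the student's weights.
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(10):
    teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # approaches 1 as updates accumulate
```

In training, the student minimizes the supervised loss on labeled meshes plus this consistency term on unlabeled meshes, while the teacher is never updated by gradient descent, only by the EMA step.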


Similar Articles


Id 17915
Similarity 0.853
Authors Mahmoud M.; Adham M.; Li Y.; Chen W.
Title Deep Learning Semantic Segmentation-Based Scan-To-Bim For Indoor Point Clouds
Published 2025 15th International Conference on Electrical Engineering, ICEENG 2025 (2025)