| Title |
Attention-Guided Multi-View Stereo Network For Depth Estimation |
| ID_Doc |
11047 |
| Authors |
Sun P.; Wu S.; Lin K. |
| Year |
2020 |
| Published |
Proceedings - 2020 IEEE 22nd International Conference on High Performance Computing and Communications, IEEE 18th International Conference on Smart City and IEEE 6th International Conference on Data Science and Systems, HPCC-SmartCity-DSS 2020 |
| DOI |
http://dx.doi.org/10.1109/HPCC-SmartCity-DSS50907.2020.00106 |
| Abstract |
Multi-View Stereo aims to recover a target's 3D geometric model from images captured from multiple viewpoints. Existing deep-learning-based approaches suffer from several problems, such as missing detail in the predicted depth map, low surface accuracy, and incomplete reconstructed 3D point cloud models. To overcome these problems, we propose the Attention-Guided Multi-View Stereo Network for 3D Depth Estimation (AG-MVSNet). We combine camera geometry with a deep neural network and adopt a coarse-to-fine deep learning framework to recover the target 3D geometric model. High-quality detailed feature information has an important influence on multi-view 3D reconstruction, and reference images captured in natural environments contain the detailed feature information needed during reconstruction. Therefore, we use detailed feature information from reference images at different scales to restore the details lost in the high-level features. Quantitative and qualitative experimental results show that the proposed algorithm produces more complete reconstructions than common multi-view 3D reconstruction algorithms. © 2020 IEEE.
| Author Keywords |
3D Reconstruction; Depth Map; Point cloud |
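
The abstract describes fusing detail features from reference images at different scales back into high-level features within a coarse-to-fine framework. Below is a minimal, hypothetical PyTorch sketch of one such attention-guided fusion step; it illustrates the general idea only, not the authors' AG-MVSNet, and all module names, channel sizes, and layer choices are assumptions.

```python
# Minimal illustrative sketch (not the paper's released code): an attention gate
# that injects fine-scale reference-image features into upsampled coarse features,
# as one stage of a coarse-to-fine depth refinement. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGuidedFusion(nn.Module):
    """Fuse high-resolution detail features into upsampled coarse features."""

    def __init__(self, coarse_ch: int, detail_ch: int, out_ch: int):
        super().__init__()
        # Per-pixel, per-channel attention weights predicted from both inputs.
        self.attention = nn.Sequential(
            nn.Conv2d(coarse_ch + detail_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj_coarse = nn.Conv2d(coarse_ch, out_ch, kernel_size=1)
        self.proj_detail = nn.Conv2d(detail_ch, out_ch, kernel_size=1)

    def forward(self, coarse: torch.Tensor, detail: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse, high-level features to the detail resolution.
        coarse_up = F.interpolate(
            coarse, size=detail.shape[-2:], mode="bilinear", align_corners=False
        )
        attn = self.attention(torch.cat([coarse_up, detail], dim=1))
        # The attention map decides how much fine detail to restore at each pixel.
        return attn * self.proj_detail(detail) + (1.0 - attn) * self.proj_coarse(coarse_up)


if __name__ == "__main__":
    fuse = AttentionGuidedFusion(coarse_ch=64, detail_ch=32, out_ch=32)
    coarse_feat = torch.randn(1, 64, 32, 40)    # low-resolution, high-level features
    detail_feat = torch.randn(1, 32, 128, 160)  # finer-scale reference-image features
    fused = fuse(coarse_feat, detail_feat)
    print(fused.shape)  # torch.Size([1, 32, 128, 160])
```

In this sketch the sigmoid attention map blends a projection of the fine-scale detail features with the upsampled coarse features, which is one common way to realize the "restore lost details in high-level features" idea; the actual AG-MVSNet architecture may differ.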