Deep-learning-based 3D reconstruction has been applied in many areas of daily life, but the vast majority of current research relies on ordinary convolutions for feature extraction. Ordinary convolutions have limited ability to extract features from weakly textured and textureless areas, which leads to blurred boundaries and loss of detail and degrades the reconstruction results. Therefore, a depth-based feature re-extraction method is proposed. First, to correct the extraction errors of ordinary convolution in low-texture areas and improve reconstruction accuracy, an adaptive feature aggregation module is introduced; it exploits deformable convolution kernels to adaptively enlarge the receptive field in low-texture areas and shrink it in richly textured areas. Second, to aggregate information across scales, enrich the extracted features, and further improve the final reconstruction accuracy, a multi-scale dilated convolution module is introduced. Finally, comparisons with multiple existing methods show that the proposed method markedly improves feature extraction in low-texture areas and raises the final reconstruction accuracy, with an overall improvement of 3.4%, making it suitable for most scenarios.
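To make the multi-scale dilated convolution idea concrete, the following is a minimal NumPy sketch, not the paper's actual implementation: a single-channel 2D convolution whose kernel is dilated at several rates, with the resulting response maps center-cropped to a common size and stacked as channels. All function names and the choice of rates here are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2D correlation with a dilated kernel (single channel).

    Illustrative sketch only; real feature extractors use multi-channel,
    padded, learned convolutions.
    """
    kh, kw = kernel.shape
    # The effective kernel footprint grows with the dilation rate
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated (strided) positions
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multi_dilation_features(x, kernel, rates=(1, 2, 3)):
    """Aggregate responses at several dilation rates into one feature stack."""
    maps = [dilated_conv2d(x, kernel, d) for d in rates]
    # Larger dilations yield smaller valid outputs; crop all maps
    # to the smallest map's size so they can be stacked as channels
    h = min(m.shape[0] for m in maps)
    w = min(m.shape[1] for m in maps)
    cropped = []
    for m in maps:
        dh = (m.shape[0] - h) // 2
        dw = (m.shape[1] - w) // 2
        cropped.append(m[dh:dh + h, dw:dw + w])
    return np.stack(cropped, axis=0)
```

Each dilation rate sees the same 3x3 kernel but a wider spatial context, which is the mechanism the module uses to enrich features in low-texture regions without adding parameters.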