Author Affiliations: Southwest Petr Univ, Sch Comp Sci & Software Engn, Chengdu, Peoples R China; Southwest Petr Univ, Intelligent Oil & Gas Lab, Chengdu, Peoples R China; Chinese Acad Sci, Inst Opt & Elect, Chengdu, Peoples R China; Southwest Petr Univ, Sch Elect Engn & Informat, Chengdu, Peoples R China; Sichuan Police Coll, Dept Traff Management, Luzhou, Peoples R China
Publication: JOURNAL OF ELECTRONIC IMAGING (J. Electron. Imaging)
Year/Volume/Issue: 2024, Vol. 33, No. 6
Subject Classification: 0808 [Engineering - Electrical Engineering]; 1002 [Medicine - Clinical Medicine]; 0809 [Engineering - Electronic Science & Technology (Engineering or Science degrees)]; 08 [Engineering]; 0702 [Science - Physics]
Funding: Opening Project of Intelligent Policing Key Laboratory of Sichuan Province [ZNJW2024KFMS003, ZNJW2023KFZD003]; Open Fund of State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University) [PLN2022-51, PLN2021-21]; high-performance computing platform of Southwest Petroleum University
Keywords: 3D reconstruction; single-view reconstruction; texture features; computer graphics; features fusion
Abstract: Accurately reconstructing the topology and texture details of three-dimensional (3D) objects from a single two-dimensional image remains a significant challenge in computer vision. Existing methods have achieved varying degrees of success with different geometric representations, but all struggle to accurately reconstruct surfaces with complex topology and texture. This study therefore proposes an approach that combines the convolutional block attention module (CBAM), texture detail fusion, and multimodal fusion to address this challenge. To enhance the model's focus on important regions of the image, we integrate the CBAM mechanism with ResNet for feature extraction. Texture detail fusion effectively captures changes on the object's surface, while multimodal fusion improves the accuracy of predicting the signed distance function. We have developed an implicit single-view 3D reconstruction network capable of recovering the topology and surface details of 3D models from a single input image. The integration of global, local, and surface texture features improves shape representation and accurately captures surface textures, filling a crucial gap in the field. During reconstruction, we extract features representing global information, local information, and texture variation from the input image. By using global information to approximate the object's shape, refining shape and surface texture details with local information, and applying distinct loss terms to constrain different aspects of reconstruction, our method achieves accurate single-image 3D reconstruction with detailed surface textures. Qualitative and quantitative analyses demonstrate the superiority of our model over state-of-the-art techniques on the ShapeNet dataset.
The significance of our work lies in its ability t
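The abstract's network represents geometry implicitly as a signed distance function (SDF): a scalar field that is negative inside the surface, zero on it, and positive outside, which the network learns to predict from image features. As a minimal illustration of what an SDF encodes (an analytic sphere SDF in plain Python, not the authors' learned model), the convention can be sketched as:

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere's surface:
    negative inside the sphere, zero on the surface, positive outside."""
    return math.dist(p, center) - radius

# Query points inside, on, and outside a unit sphere at the origin:
print(sdf_sphere((0.0, 0.0, 0.0)))   # -1.0 (center, fully inside)
print(sdf_sphere((1.0, 0.0, 0.0)))   #  0.0 (on the surface)
print(sdf_sphere((2.0, 0.0, 0.0)))   #  1.0 (one unit outside)
```

An implicit reconstruction network replaces this closed-form rule with a learned function conditioned on image features; the final mesh is then extracted as the zero level set of the predicted field (e.g. via marching cubes).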