Group-pair deep feature learning for multi-view 3D model retrieval

Authors: Chen, Xiuxiu; Liu, Li; Zhang, Long; Zhang, Huaxiang; Meng, Lili; Liu, Dongmei

Affiliation: School of Information Science and Engineering, Shandong Normal University, Jinan, People's Republic of China

Published in: APPLIED INTELLIGENCE

Year/Volume/Issue: 2022, Vol. 52, No. 2

Pages: 2013-2022

Subject Classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering and Science)]

Funding: National Natural Science Foundation of China [61702310, 62076153]; Major Fundamental Research Project of Shandong, China [ZR2019ZD03]; Taishan Scholar Project of Shandong, China

Keywords: Deep feature learning; Margin Center Loss; 3D model retrieval

Abstract: This paper employs Convolutional Neural Networks with a pooling module to extract view descriptors of 3D models, and proposes the Group-Pair Deep Feature Learning method for multi-view 3D model retrieval. In this method, the view descriptors are further learned by a supervised autoencoder and a multi-label discriminator to mine the latent features and category features of the 3D models. To enhance the discriminative capability of the model features, we propose the Margin Center Loss, which minimizes the intra-class distance and maximizes the inter-class distance. Experimental results on the ModelNet10 and ModelNet40 datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods.
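
The record does not give the exact formulation of the Margin Center Loss, so the following is only a minimal PyTorch sketch of a margin-based center loss in the spirit the abstract describes: features are pulled toward a learnable center of their own class, while pairs of distinct class centers are pushed at least a margin apart. The class name `MarginCenterLoss`, the hinge form of the inter-class term, and the margin value are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a margin-based center loss (PyTorch).
# NOTE: this is NOT the paper's exact Margin Center Loss; the hinge form,
# the learnable centers, and the margin value are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MarginCenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        self.margin = margin
        # One learnable center per category.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class term: squared distance of each feature to its own class center.
        own_centers = self.centers[labels]                        # (B, D)
        intra = ((features - own_centers) ** 2).sum(dim=1).mean()

        # Inter-class term: hinge penalty whenever two distinct centers lie
        # closer together than the margin.
        dists = torch.cdist(self.centers, self.centers, p=2)      # (C, C)
        off_diag = ~torch.eye(dists.size(0), dtype=torch.bool, device=dists.device)
        inter = F.relu(self.margin - dists[off_diag]).mean()

        return intra + inter


if __name__ == "__main__":
    # Placeholder shapes: 40 categories (as in ModelNet40), 256-D fused view descriptors.
    criterion = MarginCenterLoss(num_classes=40, feat_dim=256, margin=1.0)
    feats = torch.randn(8, 256, requires_grad=True)
    labels = torch.randint(0, 40, (8,))
    loss = criterion(feats, labels)
    loss.backward()
    print(float(loss))
```

In a training pipeline a term like this would typically be added to a standard classification loss over the discriminator's outputs; the feature dimension, batch size, and margin above are placeholders, not values from the paper.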
