Representation learning of 3D meshes using an Autoencoder in the spectral domain

Authors: Lemeunier, Clement; Denis, Florence; Lavoue, Guillaume; Dupont, Florent

Affiliations: Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69622 Villeurbanne, France; Univ Lyon, UCBL, CNRS, INSA Lyon, LIRIS, UMR5205, F-69622 Villeurbanne, France; Univ Lyon, Cent Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, ENISE, F-42023 Saint Etienne, France

Published in: COMPUTERS & GRAPHICS-UK

Year/Volume: 2022, Vol. 107

Pages: 131-143


Subject classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: ANR France project [Human4D ANR-19-CE23-0020]

Keywords: Geometric deep learning; Spectral analysis; Autoencoder; Human body triangular meshes

Abstract: Learning on surfaces is a difficult task: because the data are non-Euclidean, transferring standard techniques such as convolution and pooling is nontrivial. Common methods apply deep learning operations to triangular meshes either in the spatial domain, by defining weights between nodes, or in the spectral domain, using first-order Chebyshev polynomials followed by a return to the spatial domain. In this study, we present a Spectral Autoencoder (SAE) that applies deep learning techniques to 3D meshes by taking as input the spectral coefficients obtained directly from a spectral transform. For a dataset of surfaces sharing the same connectivity, the Graph Laplacian makes it possible to express the geometry of all samples in the frequency domain. An Autoencoder architecture then extracts salient features from the spectral coefficients without returning to the spatial domain. Finally, a latent space is built from which reconstruction and interpolation are possible. This method handles meshes with more vertices while keeping the same architecture, and enables learning on large datasets with short computation times. Through experiments, we demonstrate that this architecture yields better results than state-of-the-art methods, and does so faster. (C) 2022 Elsevier Ltd. All rights reserved.
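The spectral transform described in the abstract can be illustrated on a toy mesh. The sketch below (an assumption for illustration, not the authors' code; the helper `graph_laplacian` and the tetrahedron example are hypothetical) builds the combinatorial Graph Laplacian L = D - A from shared connectivity, eigendecomposes it to obtain a "Fourier basis" for the mesh, and projects the vertex coordinates into spectral coefficients, which is the kind of input the SAE consumes:

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(edges, n_vertices):
    # Combinatorial graph Laplacian L = D - A from an undirected edge list.
    rows, cols = zip(*edges)
    data = np.ones(len(edges))
    A = sp.coo_matrix((data, (rows, cols)), shape=(n_vertices, n_vertices))
    A = ((A + A.T) > 0).astype(float)          # symmetrize adjacency
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())  # degree matrix
    return (D - A).tocsc()

# Toy mesh: a tetrahedron (4 vertices); in the paper's setting, every
# sample in the dataset shares this connectivity, so the basis is computed once.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

L = graph_laplacian(edges, 4)
# Eigenvectors of L form an orthonormal basis ordered by frequency (eigenvalue).
evals, evecs = np.linalg.eigh(L.toarray())

# Forward spectral transform: one 3-vector of coefficients per frequency.
coeffs = evecs.T @ verts
# Inverse transform: with the full basis, the geometry is recovered exactly.
recon = evecs @ coeffs
```

Truncating `coeffs` to the first k low-frequency rows is what lets a fixed-size network handle meshes with more vertices: the input dimension depends on k, not on the vertex count.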
