
Discovering fully semantic representations via centroid- and orientation-aware feature learning

Authors: Cha, Jaehoon; Park, Jinhae; Pinilla, Samuel; Morris, Kyle L.; Allen, Christopher S.; Wilkinson, Mark I.; Thiyagalingam, Jeyan

Affiliations: Science and Technology Facilities Council, Scientific Computing Department, Didcot, England; Chungnam National University, Department of Mathematics, Daejeon, South Korea; European Bioinformatics Institute, European Molecular Biology Laboratory, Electron Microscopy Data Bank, Hinxton, England; Diamond Light Source, Electron Physical Science Imaging Centre, Didcot, England; University of Oxford, Department of Materials, Oxford, England; University of Leicester, Department of Physics and Astronomy, Leicester, England

Publication: Nature Machine Intelligence (Nat. Mach. Intell.)

Year/Volume/Issue: 2025, Vol. 7, Issue 2

Pages: 307-314


Funding: RCUK | Engineering and Physical Sciences Research Council (EPSRC) [EP/X019918/1]; STFC Facilities Fund; Chungnam National University Research Grant

Keywords: Computer science; Galaxies and clusters; Graphene; Machine learning

Abstract: Learning meaningful representations of images in scientific domains that are robust to variations in centroids and orientations remains an important challenge. Here we introduce the centroid- and orientation-aware disentangling autoencoder (CODAE), an encoder-decoder-based neural network that learns meaningful content of objects in a latent space. Specifically, a combination of a translation- and rotation-equivariant encoder, Euler encoding and an image moment loss enables CODAE to extract features invariant to the positions and orientations of objects of interest from randomly translated and rotated images. We evaluate this approach on several publicly available scientific datasets, including protein images from the life sciences, four-dimensional scanning transmission electron microscopy data from materials science and galaxy images from astronomy. The evaluation shows that CODAE learns centroids, orientations and their invariant features and outputs, as well as aligned reconstructions and exact-view reconstructions of the input images with high quality.
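The image moment loss named in the abstract builds on classical image moments, which recover an object's centroid and principal-axis orientation from intensity-weighted pixel coordinates. The following is a minimal sketch of that classical moment computation only, not the paper's loss or network; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def image_moments(img):
    """Centroid and principal-axis orientation of a 2D intensity image,
    computed from raw and second-order central image moments."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    m00 = img.sum()                       # total mass M00
    cx = (xs * img).sum() / m00           # centroid x = M10 / M00
    cy = (ys * img).sum() / m00           # centroid y = M01 / M00
    # normalized second-order central moments
    mu20 = ((xs - cx) ** 2 * img).sum() / m00
    mu02 = ((ys - cy) ** 2 * img).sum() / m00
    mu11 = ((xs - cx) * (ys - cy) * img).sum() / m00
    # orientation of the principal axis
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta
```

For example, a horizontal bar of bright pixels yields theta near 0, while a vertical bar yields theta near pi/2; matching such moment statistics between an input and its reconstruction is one way a moment-based loss can tie latent codes to centroid and orientation.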
