arXiv

Multimodal sensor fusion in the latent representation space

Authors: Piechocki, Robert J.; Wang, Xiaoyang; Bocus, Mohammud J.

Affiliation: School of Computer Science, Electrical and Electronic Engineering and Engineering Maths, University of Bristol, Bristol BS8 1UB, United Kingdom

Published in: arXiv

Year: 2022


Abstract: A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e. compressed sensing. We demonstrate the effectiveness and excellent performance on a range of multimodal fusion experiments such as multisensory classification, denoising, and recovery from subsampled ***

Codes: 68Txx. © 2022, CC BY.
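The second stage described in the abstract — using a generative model as a reconstruction prior and searching its latent space to match subsampled observations — can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: the trained multimodal decoder is replaced by a fixed random linear map `decode`, the two modalities are simply concatenated into one observation vector, and the subsampling operator `A` is a random Gaussian measurement matrix. Recovery is plain gradient descent on the latent code so that the decoded signal agrees with the compressed measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained multimodal decoder g(z): a fixed
# linear map from a low-dimensional latent z to the concatenated
# observation space of the fused modalities. (The paper's method would
# use a learned generative model here.)
latent_dim, obs_dim = 4, 32
G = rng.normal(size=(obs_dim, latent_dim)) / np.sqrt(obs_dim)

def decode(z):
    return G @ z

# Ground-truth signal lying on the decoder's range (the search manifold).
z_true = rng.normal(size=latent_dim)
x_true = decode(z_true)

# Compressed sensing: only m < obs_dim random linear measurements are seen.
m = 12
A = rng.normal(size=(m, obs_dim)) / np.sqrt(m)
y = A @ x_true

# Stage two: search the latent space so the decoded signal matches the
# subsampled observations, i.e. minimise 0.5 * ||A g(z) - y||^2 over z.
z = np.zeros(latent_dim)
lr = 0.3
for _ in range(2000):
    residual = A @ decode(z) - y
    grad = G.T @ (A.T @ residual)  # gradient of 0.5 * ||A g(z) - y||^2
    z -= lr * grad

x_hat = decode(z)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel_err)
```

Because the stand-in decoder is linear and `m` exceeds `latent_dim`, this toy problem is exactly solvable by least squares; the gradient-descent loop is kept only to mirror the latent-space search a nonlinear generative prior would require.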
