
Aligning enhanced feature representation for generalized zero-shot learning

Authors: Zhiyu FANG, Xiaobin ZHU, Chun YANG, Hongyang ZHOU, Jingyan QIN, Xu-Cheng YIN

Affiliation: School of Computer & Communication Engineering, University of Science and Technology Beijing

Publication: Science China (Information Sciences)

Year/Volume/Issue: 2025, Vol. 68, No. 2

Pages: 74-88

Subject classification: 12 [Management]; 1201 [Management Science and Engineering (degrees awardable in Management or Engineering)]; 081104 [Engineering: Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering: Software Engineering]; 0811 [Engineering: Control Science and Engineering]; 0812 [Engineering: Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: Supported by the National Science and Technology Major Project (Grant No. 2020AAA0109701), the National Science Fund for Distinguished Young Scholars (Grant No. 62125601), and the National Natural Science Foundation of China (Grant No. 62076024)

Keywords: generalized zero-shot learning; gated attention mechanism; contrastive learning; multi-modal alignment

Abstract: Constructing an effective common latent embedding by aligning the latent spaces of cross-modal variational autoencoders (VAEs) is a popular strategy for generalized zero-shot learning (GZSL). However, due to the lack of fine-grained instance-wise annotations, existing VAE methods can easily suffer from the posterior collapse problem. In this paper, we propose an innovative asymmetric VAE network based on aligning enhanced feature representation (AEFR) for GZSL. Distinguished from general VAE structures, we design two asymmetric encoders for visual and semantic observations and one decoder for visual reconstruction. Specifically, we propose a simple yet effective gated attention mechanism (GAM) in the visual encoder to enhance the information interaction between observations and latent variables, effectively alleviating the possible posterior collapse problem. In addition, we propose a novel distributional decoupling-based contrastive learning (D2-CL) scheme to guide the learning of classification-relevant information while aligning representations at the taxonomy level in the latent representation space. Extensive experiments on publicly available datasets demonstrate the state-of-the-art performance of our method. The source code is available at https://***/seeyourmind/AEFR.
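
To make the architecture described in the abstract more concrete, below is a minimal, hypothetical sketch (Python/PyTorch) of an asymmetric VAE with a gated visual encoder, a plain semantic encoder, and a single visual decoder. The module names, layer sizes, and the exact gating formula are illustrative assumptions only; this is not the authors' AEFR implementation (see the repository linked above), and it omits the D2-CL objective and the training loop.

# Hypothetical sketch of an asymmetric VAE with a gated visual encoder,
# loosely following the abstract above. All names, dimensions, and the
# gating formula are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn


class GatedVisualEncoder(nn.Module):
    """Visual encoder: a sigmoid gate modulates how much of the visual
    observation flows into the latent statistics, one way to keep the
    approximate posterior informative and discourage posterior collapse."""

    def __init__(self, vis_dim=2048, hid_dim=1024, lat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(vis_dim, hid_dim), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(vis_dim, hid_dim), nn.Sigmoid())
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)

    def forward(self, x):
        h = self.backbone(x) * self.gate(x)  # gated interaction with the observation
        return self.mu(h), self.logvar(h)


class SemanticEncoder(nn.Module):
    """Plain semantic encoder (no gate): the asymmetry between modalities."""

    def __init__(self, sem_dim=312, hid_dim=256, lat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(sem_dim, hid_dim), nn.ReLU())
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)

    def forward(self, a):
        h = self.net(a)
        return self.mu(h), self.logvar(h)


class VisualDecoder(nn.Module):
    """Single decoder reconstructing visual features from a latent code."""

    def __init__(self, lat_dim=64, hid_dim=1024, vis_dim=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(lat_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, vis_dim))

    def forward(self, z):
        return self.net(z)


def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


if __name__ == "__main__":
    x = torch.randn(8, 2048)  # e.g. CNN visual features (assumed dimension)
    a = torch.randn(8, 312)   # e.g. class attribute vectors (assumed dimension)
    enc_v, enc_s, dec = GatedVisualEncoder(), SemanticEncoder(), VisualDecoder()
    mu_v, logvar_v = enc_v(x)
    mu_s, logvar_s = enc_s(a)
    x_rec = dec(reparameterize(mu_v, logvar_v))
    print(x_rec.shape)  # torch.Size([8, 2048])

In this sketch the sigmoid gate multiplicatively modulates the hidden representation of the visual observation, which keeps the approximate posterior dependent on its input; the cross-modal alignment and contrastive terms described in the abstract would be added on top of the two latent distributions.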
