Regularized Optimal Transport Layers for Generalized Global Pooling Operations

Authors: Xu, Hongteng; Cheng, Minjie

Affiliations: Beijing Key Laboratory of Big Data Management and Analysis Methods, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, People's Republic of China; Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, People's Republic of China

Published in: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (IEEE Trans Pattern Anal Mach Intell)

Year/Volume/Issue: 2023, Vol. 45, No. 12

Pages: 15426-15444

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: National Natural Science Foundation of China [92270110, 62106271]; CAAI-Huawei MindSpore Open Fund; Fundamental Research Funds for the Central Universities; Research Funds of Renmin University of China; Beijing Key Laboratory of Big Data Management; Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative

Keywords: Task analysis; Optimization; Neural networks; Machine learning; Feature extraction; Solids; Signal processing algorithms; Global pooling; regularized optimal transport; Bregman ADMM; Sinkhorn scaling; set representation; graph embedding

Abstract: Global pooling is one of the most significant operations in many machine learning models and tasks, serving information fusion and the representation of structured data such as sets and graphs. However, lacking solid mathematical foundations, its practical implementations often depend on empirical mechanisms and thus lead to sub-optimal or even unsatisfactory performance. In this work, we develop a novel and generalized global pooling framework through the lens of optimal transport. The proposed framework is interpretable from the perspective of expectation-maximization. Essentially, it aims at learning an optimal transport across sample indices and feature dimensions, making the corresponding pooling operation maximize the conditional expectation of the input data. We demonstrate that most existing pooling methods are equivalent to solving a regularized optimal transport (ROT) problem with different specializations, and that more sophisticated pooling operations can be implemented by hierarchically solving multiple ROT problems. Making the parameters of the ROT problem learnable, we develop a family of regularized optimal transport pooling (ROTP) layers. We implement the ROTP layers as a new kind of deep implicit layer, whose model architectures correspond to different optimization algorithms. We test our ROTP layers in several representative set-level machine learning scenarios, including multi-instance learning (MIL), graph classification, graph set representation, and image classification. Experimental results show that applying our ROTP layers reduces the difficulty of designing and selecting global pooling operations: our ROTP layers can either imitate some existing global pooling methods or lead to new pooling layers that fit the data better.
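To make the abstract's core idea concrete, the following is a minimal illustrative sketch of optimal-transport-based pooling in the entropic special case, solved by Sinkhorn scaling. It is not the authors' ROTP implementation: the function name, the fixed uniform marginals, the fixed regularization weight, and the D-scaled readout are simplifying assumptions for this sketch (the paper makes such parameters learnable and also considers Bregman ADMM solvers).

```python
import numpy as np

def sinkhorn_pooling(X, eps=0.1, n_iter=50):
    """Illustrative entropic-OT pooling for one set of N samples with D features.

    Solves max_P <P, X> + eps * H(P) over transport plans P with uniform
    marginals (rows sum to 1/N, columns sum to 1/D) via Sinkhorn scaling,
    then reads out the pooled D-vector as f(X) = D * (P * X).sum(axis=0).
    """
    N, D = X.shape
    K = np.exp(X / eps)                      # Gibbs kernel for the cost -X
    a = np.full(N, 1.0 / N)                  # uniform marginal over samples
    b = np.full(D, 1.0 / D)                  # uniform marginal over features
    v = np.ones(D)
    for _ in range(n_iter):                  # alternate marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]          # transport plan in Pi(a, b)
    return D * (P * X).sum(axis=0)           # (D,) pooled representation
```

With a large `eps` the plan stays close to the uniform plan and the readout approaches mean pooling; with a small `eps` the plan concentrates mass on large entries of `X`, giving a max-like readout. This interpolation is what makes a single parameterized ROT problem able to imitate different classical pooling operations.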
