Joint Dual Learning With Mutual Information Maximization for Natural Language Understanding and Generation in Dialogues

Authors: Su, Shang-Yu; Chung, Yung-Sung; Chen, Yun-Nung

Affiliations: Rakuten Institute of Technology, Rakuten, Tokyo 158-0094, Japan; Massachusetts Institute of Technology, Electrical Engineering and Computer Science, Cambridge, MA 02139, United States; National Taiwan University, Department of Computer Science and Information Engineering, Taipei 106319, Taiwan

Publication: IEEE/ACM Transactions on Audio, Speech, and Language Processing (IEEE ACM Trans. Audio Speech Lang. Process.)

Year/Volume: 2024, Vol. 32

Pages: 2445-2452

Subject Classification: 0810 [Engineering: Information and Communication Engineering]; 0808 [Engineering: Electrical Engineering]; 08 [Engineering]; 0835 [Engineering: Software Engineering]; 0812 [Engineering: Computer Science and Technology (degrees may be awarded in Engineering or Science)]

Funding: This work was supported in part by the National Science and Technology Council (NSTC), Google, and the Young Scholar Fellowship Program under Grant 111-2628-E-002-016, Grant 111-2222-E-002-013-MY3, and Grant 112-2223-E-002-012-MY5.

Keywords: Semantics

Abstract: Modular conversational systems heavily rely on the performance of their natural language understanding (NLU) and natural language generation (NLG) components. NLU focuses on extracting core semantic concepts from input texts, while NLG constructs coherent sentences based on these extracted semantics. Inspired by information theory in digital communication, we introduce a one-way communication model that mirrors human conversations, comprising two distinct phases: (1) the conversion of thoughts into messages, similar to NLG, and (2) the comprehension of received messages, similar to NLU. This paper presents a novel algorithm that trains NLU and NLG collaboratively by concatenating their models and maximizing mutual information between inputs and outputs. This approach efficiently facilitates the transmission of semantics, leading to enhanced learning performance for both components. Our experimental results, based on three benchmark datasets, consistently demonstrate significant improvements for both NLU and NLG tasks, highlighting the practical promise of our proposed method. © 2014 IEEE.
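To make the abstract's core idea concrete, below is a minimal sketch of the joint dual-learning loop it describes: an NLG model and an NLU model are concatenated so the round trip semantics → NLG → NLU → reconstructed semantics can be trained end to end, with the reconstruction term serving as a tractable surrogate for maximizing mutual information between inputs and outputs. This is not the authors' implementation; the continuous toy representations, module sizes, loss weighting, and all names are illustrative assumptions.

```python
# Illustrative sketch only: toy continuous vectors stand in for real
# semantic frames and sentences, and MSE reconstruction stands in for
# the paper's mutual-information objective.
import torch
import torch.nn as nn

SEM_DIM, TXT_DIM = 16, 32  # assumed toy dimensionalities


class NLG(nn.Module):
    """Maps a semantic-frame vector to a (continuous) text representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SEM_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, TXT_DIM))

    def forward(self, x):
        return self.net(x)


class NLU(nn.Module):
    """Maps a text representation back to a semantic-frame vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TXT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, SEM_DIM))

    def forward(self, y):
        return self.net(y)


nlg, nlu = NLG(), NLU()
opt = torch.optim.Adam(list(nlg.parameters()) + list(nlu.parameters()), lr=1e-3)

# Paired supervision (semantics x, text y) plus the dual reconstruction loop.
x = torch.randn(8, SEM_DIM)  # gold semantic frames (toy data)
y = torch.randn(8, TXT_DIM)  # gold text representations (toy data)

for step in range(100):
    opt.zero_grad()
    sup_nlg = nn.functional.mse_loss(nlg(x), y)  # supervised NLG term
    sup_nlu = nn.functional.mse_loss(nlu(y), x)  # supervised NLU term
    # Dual loop: x -> NLG -> NLU -> x_hat. Rewarding reconstruction of x
    # from the generated message pushes the concatenated channel to
    # preserve information about its input, i.e. high I(input; output).
    dual = nn.functional.mse_loss(nlu(nlg(x)), x)
    loss = sup_nlg + sup_nlu + 0.5 * dual  # 0.5 weight is an assumption
    loss.backward()
    opt.step()
```

In this toy setup the dual term couples the two models during training while each keeps its own supervised objective, mirroring the collaborative training scheme the abstract outlines.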
