A Sequential Neural Encoder With Latent Structured Description for Modeling Sentences

Authors: Ruan, Yu-Ping; Chen, Qian; Ling, Zhen-Hua

Affiliation: Natl Engn Lab Speech & Language Informat Proc, Univ Sci & Technol China, Hefei 230027, Anhui, Peoples R China

Published in: IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING

Year/Volume/Issue: 2018, Vol. 26, No. 2

Pages: 231-242

Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0702 [Science - Physics]

Funding: Science and Technology Development of Anhui Province, China [2014z02006]; Fundamental Research Funds for the Central Universities [WK2350000001]; Chinese Academy of Sciences [XDB02070006]; National Natural Science Foundation of China [U1636201]

Keywords: Recurrent neural network; long short-term memory; sentence modeling; syntax structure

Abstract: In this paper, we propose a sequential neural encoder with latent structured description (SNELSD) for modeling sentences. This model introduces latent chunk-level representations into conventional sequential neural encoders, i.e., recurrent neural networks with long short-term memory (LSTM) units, to consider the compositionality of languages in semantic modeling. An SNELSD model has a hierarchical structure that includes a detection layer and a description layer. The detection layer predicts the boundaries of latent word chunks in an input sentence and derives a chunk-level vector for each word. The description layer utilizes modified LSTM units to process these chunk-level vectors in a recurrent manner and produces sequential encoding outputs. These output vectors are further concatenated with word vectors or the outputs of a chain LSTM encoder to obtain the final sentence representation. All the model parameters are learned in an end-to-end manner without a dependency on additional text chunking or syntax parsing. A natural language inference task and a sentiment analysis task are adopted to evaluate the performance of our proposed model. The experimental results demonstrate the effectiveness of the proposed SNELSD model in exploring task-dependent chunking patterns during the semantic modeling of sentences. Furthermore, the proposed method achieves better performance than conventional chain LSTMs and tree-structured LSTMs on both tasks.
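The pipeline described in the abstract (a detection layer that predicts soft chunk boundaries, a running chunk-level vector per word, a recurrent description layer, and final concatenation with the word vectors) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the dimensions, the logistic boundary detector `W_b`, and the simple gated recurrence standing in for the paper's modified LSTM units are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions (hypothetical; chosen only for the sketch)
d_word, d_chunk = 8, 8

# Detection layer: predicts a soft chunk-boundary probability per word
W_b = rng.normal(scale=0.1, size=(d_word,))

# Description layer: a minimal tanh recurrence standing in for the
# paper's modified LSTM units
W_h = rng.normal(scale=0.1, size=(d_chunk, d_chunk))
W_x = rng.normal(scale=0.1, size=(d_chunk, d_word))

def snelsd_encode(words):
    """Encode a sentence (seq_len x d_word) into per-word vectors that
    concatenate the word embedding with a chunk-level description."""
    boundaries = sigmoid(words @ W_b)  # soft boundary per word
    chunk_vec = np.zeros(d_word)
    h = np.zeros(d_chunk)
    outputs = []
    for t in range(words.shape[0]):
        # Softly reset the running chunk vector at predicted boundaries,
        # then fold in the current word (chunk-level vector per word)
        chunk_vec = (1.0 - boundaries[t]) * chunk_vec + words[t]
        # Recurrent description over the chunk-level vectors
        h = np.tanh(W_h @ h + W_x @ chunk_vec)
        # Final representation: word vector concatenated with description
        outputs.append(np.concatenate([words[t], h]))
    return np.stack(outputs)

sentence = rng.normal(size=(5, d_word))  # 5 toy word embeddings
enc = snelsd_encode(sentence)
print(enc.shape)  # (5, 16)
```

Because the boundary gate is a differentiable sigmoid rather than a hard segmentation, every parameter here could be trained end-to-end, which mirrors the abstract's claim that no external chunker or parser is required.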
