Stroke constrained attention network for online handwritten mathematical expression recognition

Authors: Wang, Jiaming; Du, Jun; Zhang, Jianshu; Wang, Bin; Ren, Bo

Affiliations: Univ Sci & Technol China, Natl Engn Lab Speech & Language Informat Proc, Hefei, Anhui, Peoples R China; Tencent Youtu Lab, Shenzhen, Peoples R China

Published in: PATTERN RECOGNITION

Year/Volume: 2021, Vol. 119

Article number: 108047

Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (eligible for Engineering or Science degrees)]

Funding: MOE-Microsoft Key Laboratory of USTC; Youtu Lab of Tencent

Keywords: Stroke-level information; Multi-modal fusion; Encoder-decoder; Attention mechanism; Handwritten mathematical expression recognition

Abstract: In this paper, we propose a novel stroke constrained attention network (SCAN) which treats the stroke as the basic unit for encoder-decoder based online handwritten mathematical expression recognition (HMER). Unlike previous methods, which use trace points or image pixels as basic units, SCAN makes full use of stroke-level information for better alignment and representation. The proposed SCAN can be adopted in both single-modal (online or offline) and multi-modal HMER. For single-modal HMER, SCAN first employs a CNN-GRU encoder to extract point-level features from input traces in online mode and a CNN encoder to extract pixel-level features from input images in offline mode, and then uses stroke-constrained information to convert them into online and offline stroke-level features. Stroke-level features explicitly group points or pixels belonging to the same stroke, thereby reducing the difficulty of symbol segmentation and recognition for the attention-based decoder. For multi-modal HMER, in addition to fusing multi-modal information in the decoder, SCAN can also fuse multi-modal information in the encoder by utilizing the stroke-based alignments between the online and offline modalities. Encoder fusion is a better way of combining multi-modal information, as it performs the information interaction one step before decoder fusion, so that the advantages of the multiple modalities can be exploited earlier and more adequately. Furthermore, we propose an approach combining encoder fusion and decoder fusion, namely encoder-decoder fusion, which further improves performance. Evaluated on a benchmark published by the CROHME competition, the proposed SCAN achieves state-of-the-art performance. Moreover, through experiments on an additional task, online handwritten Chinese character recognition (HCCR), we demonstrate the generality of the proposed method. (c) 2021 Elsevier Ltd. All rights reserved.
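The abstract's core idea of converting point-level features into stroke-level features can be sketched as pooling the per-point feature vectors over the points belonging to each stroke. The sketch below uses simple average pooling and made-up names (`stroke_level_features`, `stroke_ids`); the paper's actual pooling operation and interfaces are not specified here, so this is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def stroke_level_features(point_features, stroke_ids):
    """Pool point-level features into stroke-level features.

    point_features: (num_points, feat_dim) array, e.g. CNN-GRU encoder outputs.
    stroke_ids:     length-num_points sequence; stroke_ids[i] is the index of
                    the stroke that point i belongs to.
    Returns an (num_strokes, feat_dim) array, one averaged feature per stroke
    (illustrative average pooling; the paper may pool differently).
    """
    stroke_ids = np.asarray(stroke_ids)
    point_features = np.asarray(point_features, dtype=float)
    # One row per stroke: average the features of that stroke's points.
    return np.stack([point_features[stroke_ids == s].mean(axis=0)
                     for s in np.unique(stroke_ids)])

# Toy example: 5 points with 4-dim features, grouped into 2 strokes.
feats = np.arange(20, dtype=float).reshape(5, 4)
ids = [0, 0, 0, 1, 1]
strokes = stroke_level_features(feats, ids)  # shape (2, 4)
```

Grouping points by stroke this way is what lets the decoder's attention attend over a short sequence of stroke features instead of every raw trace point, which is the alignment benefit the abstract describes.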
