
Indonesian Abstractive Text Summarization Using Stacked Embeddings and Transformer Decoder

Authors: Winarko, Edi; Tanoto, Luis; Reza, Muhammad Haidar

Affiliation: Department of Computer Science and Electronics, Universitas Gadjah Mada, Indonesia

Published in: IAENG International Journal of Computer Science (IAENG Int. J. Comput. Sci.)

Year/Volume/Issue: 2025, Vol. 52, No. 4

Pages: 1051-1061

Funding: This work is supported in part by the Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, UGM, Schema B Research Grant No. 3328/UN1/FMIPA.1.3/KP/PT.01.03/2024.

Keywords: Decoding

Abstract: Document summarization falls into two categories: extractive and abstractive. Research on abstractive summarization is more limited than research on extractive summarization, especially for Indonesian documents, and most existing studies of Indonesian abstractive summarization rely on a single embedding approach in their encoder. This study develops an abstractive Indonesian document summarization model that uses stacked embeddings as the encoder and a Transformer-based decoder. Stacked embeddings capture a more comprehensive range of linguistic features, improving the model's ability to generalize across word forms and morphological variations. The stack combines Bidirectional Encoder Representations from Transformers (BERT), Byte Pair Embeddings (BPE), Character Embeddings (CE), and FastText (FT). We conduct experiments on the effect of BERT layer selection and of various embedding stacks used as the encoder in the proposed summarization model. On the Liputan6 dataset, the experimental results show that using all layers of BERT as the encoder gives the best summarization performance. In addition, the stack of BERT, CE, and BPE gives the highest F1 scores of 35.58 (ROUGE-1), 15.40 (ROUGE-2), and 32.80 (ROUGE-L) when trained on 50,000 samples. In contrast, when trained on 75,000 samples, the stacked embeddings fall below the BERT embedding alone, which reaches F1 scores of 37.18 (ROUGE-1), 18.19 (ROUGE-2), and 34.28 (ROUGE-L). Our proposed model achieves performance close to state-of-the-art models despite using less than 40% of the training data in the Liputan6 dataset. © 2025 International Association of Engineers. All rights reserved.
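The record does not name the authors' tooling, but the encoder described in the abstract (BERT, BPE, character, and FastText embeddings concatenated per token) matches the StackedEmbeddings pattern of the Flair library. Below is a minimal sketch of such an encoder input, assuming Flair and the indobenchmark/indobert-base-p1 IndoBERT checkpoint; both the library and the checkpoint are assumptions for illustration, not details taken from the paper.

    # Sketch only: builds stacked token embeddings for Indonesian text.
    # Library and checkpoint choices are illustrative assumptions,
    # not the paper's actual setup.
    from flair.data import Sentence
    from flair.embeddings import (
        TransformerWordEmbeddings,  # BERT
        BytePairEmbeddings,         # BPE
        CharacterEmbeddings,        # CE (trained jointly with the downstream task)
        WordEmbeddings,             # FastText vectors for Indonesian ("id")
        StackedEmbeddings,
    )

    # layers="all" mirrors the paper's best-performing setting of using
    # every BERT layer (how the layers are pooled is not stated; mean is
    # one common choice).
    bert = TransformerWordEmbeddings(
        "indobenchmark/indobert-base-p1",  # assumed Indonesian BERT checkpoint
        layers="all",
        layer_mean=True,
    )
    bpe = BytePairEmbeddings("id")   # pretrained Indonesian byte-pair embeddings
    ce = CharacterEmbeddings()       # character-level RNN embeddings
    ft = WordEmbeddings("id")        # Flair's Indonesian FastText embeddings

    # Concatenate all four per token: the "stacked embedding" encoder input.
    stack = StackedEmbeddings([bert, bpe, ce, ft])

    sentence = Sentence("Pemerintah meresmikan jalan tol baru di Jakarta.")
    stack.embed(sentence)
    for token in sentence:
        print(token.text, token.embedding.shape)  # one concatenated vector per token

The Transformer-based decoder that consumes these vectors to generate the summary is a separate component and is not sketched here.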
