
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts

Authors: Chen, Zhuo; Wang, Xinyu; Jiang, Yong; Xie, Pengjun; Huang, Fei; Tu, Kewei

Affiliations: School of Information Science and Technology, ShanghaiTech University, China; Shanghai Engineering Research Center of Intelligent Vision and Imaging, China; Institute for Intelligent Computing, Alibaba Group, China

Publication: arXiv

Year: 2024

Subject: Question answering

Abstract: In the era of large language models, applying techniques such as Retrieval Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints including model sizes and computing resources, the length of context is often limited, and it becomes challenging to empower the model to cover overlong contexts while answering questions from open domains. This paper proposes a general and convenient method to cover longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder and a cross-attention mechanism to effectively encode contexts. With this method, the original language models can cover contexts several times longer while keeping the computing requirements close to the baseline. The experiments demonstrate that after finetuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings. The code will be released at https://***/Alibaba-NLP/Vec-RA-ODQA. Copyright © 2024, The Authors. All rights reserved.
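The record gives no detail beyond the abstract, so the following is only a rough sketch of the general idea it describes (a small encoder compresses retrieved passages into vectors, and the question-answering model attends to those vectors through cross-attention). It is written in PyTorch; the module names, dimensions, and pooling scheme are assumptions for illustration, not the authors' released implementation (see the repository linked above for that).

# Illustrative sketch only (assumed PyTorch code, not the paper's implementation):
# a small encoder turns each retrieved passage into a few summary vectors, and the
# reader cross-attends from its own hidden states to the concatenated vectors.
import torch
import torch.nn as nn


class SmallContextEncoder(nn.Module):
    """Encodes a token-embedded passage into a fixed number of summary vectors."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, n_summary: int = 8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Learned "summary" queries that pool each passage into n_summary vectors.
        self.summary = nn.Parameter(torch.randn(n_summary, d_model))
        self.pool = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, passage_emb: torch.Tensor) -> torch.Tensor:
        # passage_emb: (batch, passage_len, d_model)
        hidden = self.layer(passage_emb)
        queries = self.summary.unsqueeze(0).expand(passage_emb.size(0), -1, -1)
        pooled, _ = self.pool(queries, hidden, hidden)
        return pooled  # (batch, n_summary, d_model)


class CrossAttentionReader(nn.Module):
    """Lets the question-side hidden states attend over all context vectors."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, question_hidden: torch.Tensor,
                context_vecs: torch.Tensor) -> torch.Tensor:
        fused, _ = self.cross_attn(question_hidden, context_vecs, context_vecs)
        return question_hidden + fused  # residual fusion of retrieved evidence


if __name__ == "__main__":
    batch, d_model = 2, 256
    encoder = SmallContextEncoder(d_model)
    reader = CrossAttentionReader(d_model)
    # Three retrieved passages of 128 tokens each, already token-embedded.
    passages = [torch.randn(batch, 128, d_model) for _ in range(3)]
    # Each passage is compressed to 8 vectors, so 3 passages cost 24 cross-attention
    # positions instead of 384 tokens of extra input context.
    context_vecs = torch.cat([encoder(p) for p in passages], dim=1)
    question_hidden = torch.randn(batch, 32, d_model)
    fused = reader(question_hidden, context_vecs)
    print(fused.shape)  # torch.Size([2, 32, 256])

In this toy setup, three 128-token passages add only 24 attended positions for the reader; compression of this kind is what allows a model to cover contexts several times longer while keeping compute close to the baseline, as the abstract claims.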
