Author Affiliations: School of Information Science and Technology, ShanghaiTech University, China; Shanghai Engineering Research Center of Intelligent Vision and Imaging, China; Institute for Intelligent Computing, Alibaba Group, China
Publication: arXiv
Year/Volume/Issue: 2024
Abstract: In the era of large language models, techniques such as Retrieval-Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints including model size and computing resources, the context length is often limited, and it is challenging to enable a model to cover overlong contexts while answering open-domain questions. This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder and a cross-attention mechanism to encode contexts effectively. With our method, the original language models can cover contexts several times longer while keeping computing requirements close to the baseline. Our experiments demonstrate that, after finetuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings. Our code will be released at https://***/Alibaba-NLP/Vec-RA-ODQA. Copyright © 2024, The Authors. All rights reserved.
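The abstract describes the general idea of compressing retrieved passages with a small encoder and letting the main language model attend to the resulting vectors via cross-attention. The following is a minimal, hypothetical PyTorch sketch of that idea only; it is not the authors' released implementation, and all class, variable, and dimension names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ContextCrossAttention(nn.Module):
    """Hypothetical sketch: decoder hidden states attend over context
    vectors produced by a small encoder, so the main LM never has to
    fit the full retrieved text inside its own context window."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) -- states from the base LM
        # ctx:    (batch, n_ctx_vectors, d_model) -- encoded passages
        attended, _ = self.attn(query=hidden, key=ctx, value=ctx)
        return self.norm(hidden + attended)  # residual connection + norm

if __name__ == "__main__":
    # Toy usage: a "small encoder" compresses retrieved-passage token
    # embeddings into context vectors; the decoder cross-attends to them.
    d_model, batch = 256, 2
    small_encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    )
    passages = torch.randn(batch, 512, d_model)  # stand-in passage embeddings
    ctx_vectors = small_encoder(passages)        # (batch, 512, d_model)
    hidden = torch.randn(batch, 32, d_model)     # decoder states for the question
    layer = ContextCrossAttention(d_model)
    print(layer(hidden, ctx_vectors).shape)      # torch.Size([2, 32, 256])
```

Note the design consequence the abstract claims: because the cross-attention cost grows with the number of encoded context vectors rather than with the base model's own sequence length, the decoder's compute stays close to the baseline even as the covered context grows.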