ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding

Authors: Shaham, Uri; Ivgi, Maor; Efrat, Avia; Berant, Jonathan; Levy, Omer

Affiliations: The Blavatnik School of Computer Science, Tel Aviv University; Meta AI, United States

Publication: arXiv

Year: 2023

Subject: Zero-shot learning

Abstract: We introduce ZeroSCROLLS, a zero-shot benchmark for natural language understanding over long texts, which contains only test and small validation sets, without training data. We adapt six tasks from the SCROLLS benchmark and add four new datasets, including two novel information-fusing tasks, such as aggregating the percentage of positive reviews. Using ZeroSCROLLS, we conduct a comprehensive evaluation of both open-source and closed large language models, finding that Claude outperforms ChatGPT, and that GPT-4 achieves the highest average score. However, there is still room for improvement on multiple open challenges in ZeroSCROLLS, such as aggregation tasks, where models struggle to pass the naive baseline. As the state of the art is a moving target, we invite researchers to evaluate their ideas on the live ZeroSCROLLS leaderboard. © 2023, CC BY.
