Large-scale Evaluation of Transformer-based Article Encoders on the Task of Citation Recommendation

Authors: Medić, Zoran; Šnajder, Jan

Affiliation: Text Analysis and Knowledge Engineering Lab, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia

Published in: arXiv

Year: 2022

Subject: Signal encoding

Abstract: Recently introduced transformer-based article encoders (TAEs), designed to produce similar vector representations for mutually related scientific articles, have demonstrated strong performance on benchmark datasets for scientific article recommendation. However, the existing benchmark datasets are predominantly focused on single domains and, in some cases, contain easy negatives in small candidate pools. Evaluating representations on such benchmarks might obscure the realistic performance of TAEs in setups with thousands of articles in candidate pools. In this work, we evaluate TAEs on large benchmarks with more challenging candidate pools. We compare the performance of TAEs with a lexical retrieval baseline model, BM25, on the task of citation recommendation, where the model produces a list of recommendations for citing in a given input article. We find that BM25 is still very competitive with state-of-the-art neural retrievers, which is surprising given the strong performance of TAEs on small benchmarks. As a remedy for the limitations of the existing benchmarks, we propose a new benchmark dataset for evaluating scientific article representations: the Multi-Domain Citation Recommendation dataset (MDCR), which covers different scientific fields and contains challenging candidate pools. © 2022, CC BY.
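The retrieval setup the abstract describes, i.e. scoring a candidate pool against an input article and returning a ranked list, can be illustrated with the BM25 baseline it mentions. The sketch below is a minimal self-contained Okapi BM25 implementation; the tokenization, the toy candidate abstracts, and the parameter values k1=1.5, b=0.75 are illustrative assumptions, not the paper's actual experimental setup.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of each candidate document against the query."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # document frequency of each term across the candidate pool
    df = Counter()
    for d in docs_tokens:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Hypothetical candidate pool (toy data, whitespace tokenization)
candidates = [
    "transformer encoders for scientific article representation".split(),
    "lexical retrieval with bm25 for document ranking".split(),
    "image segmentation with convolutional networks".split(),
]
# Hypothetical text of the citing (input) article
query = "bm25 lexical retrieval baseline for citation ranking".split()

scores = bm25_scores(query, candidates)
# Candidate indices ranked best-first, as a recommendation list
ranking = sorted(range(len(candidates)), key=scores.__getitem__, reverse=True)
```

In a realistic evaluation such as the one the paper argues for, the candidate pool would contain thousands of articles rather than three, which is exactly the regime where the abstract reports BM25 remaining competitive with neural encoders.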
