ISBN (print): 9789819794331; 9789819794348
Research indicates that incorporating external knowledge into pre-trained language models (PLMs) can enhance their performance on knowledge-driven downstream tasks. However, most existing approaches either require retraining the model or fail to preserve the complete information of knowledge graphs (KGs). In this paper, we introduce a simple but effective ProSide module for PLMs, which comprises two components: a Knowledge Projector and a Knowledge Sideway. The Knowledge Projector transforms knowledge representations from entity embeddings in the KG space into the semantic space, while the Knowledge Sideway retains the complete KG information in sideway modules through pre-training. Our ProSide can be attached to different frozen language models without retraining them. Results on three common knowledge-driven tasks demonstrate that ProSide improves model performance and achieves state-of-the-art results. Additionally, we provide further analysis and case studies to illustrate the mechanism from the perspective of the representation space. Code and models are publicly available at https://***/hcffffff/ProSide.
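The abstract does not specify the internals of the two components, so the sketch below is a minimal, hypothetical PyTorch rendering of the described design: it assumes the Knowledge Projector is a small MLP mapping KG entity embeddings into the PLM's hidden space, and that the Knowledge Sideway fuses the projected knowledge with frozen PLM hidden states through a learned sigmoid gate. All class names, dimensions, and the gating fusion are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class KnowledgeProjector(nn.Module):
    # Hypothetical sketch: maps entity embeddings from KG space (kg_dim)
    # into the PLM's semantic space (hidden_dim) with a small MLP.
    def __init__(self, kg_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(kg_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, entity_emb: torch.Tensor) -> torch.Tensor:
        # entity_emb: (batch, seq_len, kg_dim) -> (batch, seq_len, hidden_dim)
        return self.proj(entity_emb)

class KnowledgeSideway(nn.Module):
    # Hypothetical sketch: a lightweight side module trained alongside a
    # frozen PLM; here it fuses projected knowledge with the PLM's hidden
    # states via a sigmoid gate (an assumption, not the paper's design).
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, plm_hidden: torch.Tensor, knowledge: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([plm_hidden, knowledge], dim=-1)))
        return plm_hidden + g * knowledge

# Toy usage: the PLM itself stays frozen; only the projector and sideway
# parameters would receive gradients during pre-training.
batch, seq_len, kg_dim, hidden_dim = 2, 8, 100, 768
projector = KnowledgeProjector(kg_dim, hidden_dim)
sideway = KnowledgeSideway(hidden_dim)
plm_hidden = torch.randn(batch, seq_len, hidden_dim)   # stand-in for frozen PLM outputs
entity_emb = torch.randn(batch, seq_len, kg_dim)       # stand-in for aligned KG entity embeddings
fused = sideway(plm_hidden, projector(entity_emb))
print(fused.shape)  # torch.Size([2, 8, 768])
```

Under these assumptions, only the projector and sideway are updated while the PLM stays frozen, which is consistent with the abstract's claim that ProSide accommodates different frozen language models without retraining them.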