Recommender systems are widely employed to mitigate information overload by tailoring online content to individual preferences. Existing recommendation methods typically focus on optimizing the relevance between candidate item content and users' historical behaviors. However, these methods often neglect the quality of the recommended content, which can degrade user experience and hinder the long-term growth of platforms. Addressing this issue is particularly challenging because signals of content-quality feedback are typically sparse in the user interaction data (e.g., clicks) commonly used for model training. In this paper, we propose a human feedback alignment framework for recommender systems (HFAR), which leverages well-aligned large language models to simulate human feedback on content quality and thereby enhance recommendation. Specifically, we propose a multi-task learning-based knowledge transfer framework that infuses recommendation models with an awareness of fine-grained feedback on content quality from targeted ***. We then develop a contrastive learning-based feedback integration mechanism that embeds targeted human feedback into the ranking strategy, enabling quality-aware recommendation decision-making. In addition, we propose a multi-objective joint training framework that optimizes the model jointly under utility and quality objectives. Experiments show that HFAR achieves a maximum improvement of 84.78% in recommendation quality while maintaining both recommendation accuracy and efficiency.
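The multi-objective joint training mentioned above can be illustrated as a weighted combination of a utility (e.g., click-prediction) loss and a quality-alignment loss. This is a minimal sketch under assumptions: the function names, the specific loss forms (binary cross-entropy and squared error), and the fixed trade-off weight `alpha` are illustrative and not taken from HFAR's actual design.

```python
import math

def utility_loss(pred_click: float, label: float) -> float:
    """Binary cross-entropy on the click-prediction (utility) objective."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(pred_click + eps)
             + (1.0 - label) * math.log(1.0 - pred_click + eps))

def quality_loss(pred_quality: float, llm_feedback: float) -> float:
    """Squared error against an LLM-simulated content-quality score
    (a hypothetical stand-in for the simulated human feedback)."""
    return (pred_quality - llm_feedback) ** 2

def joint_loss(pred_click: float, label: float,
               pred_quality: float, llm_feedback: float,
               alpha: float = 0.5) -> float:
    """Jointly optimize utility and quality; alpha trades off the two
    objectives (a fixed scalar here, purely for illustration)."""
    return (utility_loss(pred_click, label)
            + alpha * quality_loss(pred_quality, llm_feedback))

# Example: one training instance with a click label and a simulated
# quality score from the LLM.
loss = joint_loss(pred_click=0.8, label=1.0,
                  pred_quality=0.6, llm_feedback=0.9, alpha=0.5)
```

In practice the two heads would share a backbone and `alpha` would be tuned (or scheduled) to balance accuracy against quality; the sketch only shows how the two objectives combine into one training signal.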