A task-flow scheduler has been designed to optimally allocate resources for a task-flow based Aircraft Collaborative Design Application in an Optical Grid. The scheduler and application have been deployed on an Optical Grid testbed, and the application's practical operation demonstrates the feasibility of the approach.
Recently, the paradigm of pre-training and fine-tuning has achieved impressive performance owing to its ability to transfer general knowledge from a pre-trained domain to a target domain. Meanwhile, Graph Neural Networks (GNNs) have gained prominence in recommender systems. However, graph-based recommendation still lacks a unified pre-training and fine-tuning paradigm. Applying pre-training and fine-tuning to graph-based recommendation is challenging due to the unique characteristics of recommendation data, including non-uniform representations, negative transfer effects, and skewed data distributions. To overcome these challenges, we introduce ProRec (Pre-training and prompting Recommendation), a novel model that synergizes uniform graph pre-training with prompt-tuning for recommendation systems. Specifically, to address inconsistent features across different recommendation datasets, ProRec constructs unified input features at the subgraph level and uses a Graph Auto-Encoder for pre-training, laying the foundation for uniform knowledge transfer from the pre-trained domain to the downstream domain. Additionally, ProRec employs prompt-tuning during the fine-tuning phase, which enhances the generalization of pre-trained knowledge to downstream tasks in a parameter-efficient manner, thereby reducing negative transfer effects. Furthermore, a cross-layer contrastive learning strategy is adopted to counteract skewed data distributions, promoting more evenly distributed and informative representations. Finally, extensive benchmark comparisons demonstrate that ProRec outperforms the latest state-of-the-art methods. The source code necessary for replication is available at https://***/Code2Q/ProRec.
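The two-stage pipeline described above, graph auto-encoder pre-training followed by parameter-efficient prompt-tuning with the encoder frozen, can be sketched as follows. This is a minimal illustration, not ProRec's actual implementation: the toy adjacency matrix, the one-layer encoder, and the numerical gradients are all simplifying assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction graph (hypothetical data; ProRec itself builds unified
# input features at the subgraph level across recommendation datasets).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))                    # node features
D = np.diag(1.0 / np.sqrt(A.sum(1) + 1))
A_hat = D @ (A + np.eye(4)) @ D                # normalized adjacency + self-loops

def encode(X, W, prompt=None):
    """One-layer GNN encoder; an optional prompt vector is added to the inputs."""
    H = X + prompt if prompt is not None else X
    return np.tanh(A_hat @ H @ W)

def recon_loss(Z):
    """Graph auto-encoder objective: reconstruct the adjacency from embeddings."""
    S = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))       # sigmoid of inner products
    return float(np.mean((S - A) ** 2))

# --- Stage 1: pre-training, learn encoder weights by reconstruction ---
W = rng.normal(scale=0.1, size=(8, 4))
init_loss = recon_loss(encode(X, W))
eps, lr = 1e-4, 0.5
for _ in range(200):
    # forward-difference numerical gradient (a real system would use autodiff)
    g = np.zeros_like(W)
    base = recon_loss(encode(X, W))
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W[i, j] += eps
            g[i, j] = (recon_loss(encode(X, W)) - base) / eps
            W[i, j] -= eps
    W -= lr * g
pretrain_loss = recon_loss(encode(X, W))

# --- Stage 2: prompt-tuning, freeze W and learn only a small prompt vector ---
prompt = np.zeros((1, 8))                      # far fewer parameters than W
for _ in range(100):
    g = np.zeros_like(prompt)
    base = recon_loss(encode(X, W, prompt))
    for j in range(prompt.shape[1]):
        prompt[0, j] += eps
        g[0, j] = (recon_loss(encode(X, W, prompt)) - base) / eps
        prompt[0, j] -= eps
    prompt -= lr * g
tuned_loss = recon_loss(encode(X, W, prompt))

print(init_loss, pretrain_loss, tuned_loss)
```

The point of the sketch is the parameter-efficiency argument: in stage 2 only the 8-dimensional prompt is updated while the pre-trained weights stay fixed, which is how prompt-tuning adapts pre-trained knowledge to a downstream graph without overwriting it.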