Anti-fraud engineering for online credit loan (OCL) platforms is becoming more challenging due to the growing specialization of gang fraud. Associations are critical features for assessing the credibility of loan applications in OCL fraud prediction. State-of-the-art solutions employ graph-based methods to effectively mine hidden associations among loan applications. They perform well thanks to an information asymmetry guaranteed by the platforms' huge advantage over fraudsters in the quantity and quality of data at their disposal. A foreseeable difficulty is data isolation, caused by mistrust between platforms and by data-protection legislation for privacy preservation. To maintain the platforms' advantage, we design a privacy-preserving distributed graph learning framework that ensures critical association repair by combining parameter sharing and data sharing. Specifically, we propose an association reconstruction mechanism (ARM), consisting of devised exploration, processing, transmission, and utilization schemes, to realize data sharing. For parameter sharing, we design a hybrid encryption technique that protects privacy while graph neural network (GNN) models are learned collaboratively across different financial client platforms. We conduct experiments on real-life data from large financial platforms; the results demonstrate the effectiveness and efficiency of our proposed methods.
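The abstract names a hybrid encryption technique for parameter sharing but gives no details of the scheme. As a rough illustration only, the sketch below shows the common hybrid pattern such a design could build on: the large serialized parameter payload is encrypted with a fresh symmetric key, and that key is wrapped with the recipient platform's public key. The function names (encrypt_params, decrypt_params) are hypothetical, not the paper's API.

```python
# Minimal hybrid-encryption sketch for sharing GNN parameters between
# platforms (an assumed scheme, not the paper's actual protocol):
# AES-GCM encrypts the bulk payload; RSA-OAEP wraps the AES key so that
# only the recipient platform can recover it.
import os
import pickle

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_params(params, recipient_public_key):
    """Serialize and encrypt model parameters for one recipient platform."""
    payload = pickle.dumps(params)                # e.g., a dict of weight arrays
    aes_key = AESGCM.generate_key(bit_length=256) # fresh key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)
    wrapped_key = recipient_public_key.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext

def decrypt_params(wrapped_key, nonce, ciphertext, private_key):
    """Recover parameters on the recipient side."""
    aes_key = private_key.decrypt(wrapped_key, OAEP)
    return pickle.loads(AESGCM(aes_key).decrypt(nonce, ciphertext, None))

# Usage: each platform publishes a public key; senders wrap a per-message key.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, nonce, ct = encrypt_params({"layer1.w": [0.1, 0.2]}, key.public_key())
assert decrypt_params(wrapped, nonce, ct, key) == {"layer1.w": [0.1, 0.2]}
```

The per-message symmetric key keeps the expensive asymmetric operation small and constant-size regardless of model size, which is why hybrid schemes are the usual choice for bulky payloads such as model parameters.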
ISBN (print): 9798400704369
Graph Neural Networks (GNNs) have emerged as powerful tools for supervised machine learning over graph-structured data, while sampling-based node representation learning is widely used in unsupervised learning. However, scalability remains a major challenge in both settings for large graphs (e.g., those with over 1 billion nodes). The scalability bottleneck largely stems from the mini-batch sampling phase in GNNs and the random-walk sampling phase in unsupervised methods. These processes often require storing features or embeddings in memory. In distributed training, they require frequent, inefficient random access to data stored across different workers, and such repeated inter-worker communication for every mini-batch leads to high communication overhead and computational inefficiency. We propose GraphScale, a unified framework for both supervised and unsupervised learning that stores and processes large graph data in a distributed manner. The key insight in our design is the separation of workers that store data from those that perform the training. This separation lets us decouple computation and storage in graph training, effectively building a pipeline in which data fetching and computation overlap asynchronously. Our experiments show that GraphScale outperforms state-of-the-art methods for distributed training of both GNNs and node embeddings. We evaluate GraphScale on public and proprietary graph datasets and observe a reduction of at least 40% in end-to-end training time compared to popular distributed frameworks, with no loss in performance. While most existing methods do not support training node embeddings on billion-node graphs, GraphScale is currently deployed in production at TikTok, enabling efficient learning over such graphs.
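The pipelining idea above (storage workers serving data while trainer workers compute) can be illustrated with a small producer-consumer sketch. This is a toy illustration under assumed names, not GraphScale's actual implementation: fetch_batch stands in for an RPC to a remote storage worker, and a bounded queue lets prefetching overlap asynchronously with training.

```python
# Toy sketch of storage/compute decoupling: a background thread prefetches
# mini-batches from a (hypothetical) storage worker into a bounded queue,
# so network fetches overlap with the trainer's computation.
import queue
import threading

def fetch_batch(batch_id):
    """Placeholder for an RPC to a storage worker holding features/embeddings."""
    return {"batch_id": batch_id, "features": [0.0] * 128}

def prefetch_worker(batch_ids, out_queue):
    for bid in batch_ids:
        out_queue.put(fetch_batch(bid))   # blocks when the pipeline is full
    out_queue.put(None)                   # sentinel: no more batches

def train_step(batch):
    pass  # forward/backward pass on the trainer worker (omitted)

def train(batch_ids, depth=4):
    q = queue.Queue(maxsize=depth)        # bounded buffer = pipeline depth
    t = threading.Thread(target=prefetch_worker, args=(batch_ids, q), daemon=True)
    t.start()
    while (batch := q.get()) is not None: # compute overlaps the next fetch
        train_step(batch)
    t.join()

train(range(100))
```

The bounded queue is the key design choice: it caps memory use on the trainer while keeping enough batches in flight that the trainer is never idle waiting on remote storage.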