ISBN:
(Print) 9781450337236
In recent years, systems researchers have devoted considerable effort to the study of large-scale graph processing. Existing distributed graph processing systems such as Pregel, which rely solely on distributed memory for their computations, fail to scale seamlessly when the graph data and intermediate computational results no longer fit into memory; moreover, most distributed approaches to iterative graph computation do not consider secondary storage a viable option. This paper presents GraphMap, a distributed iterative graph computation framework that maximizes access locality and speeds up distributed iterative graph computations by effectively utilizing secondary storage. GraphMap has three salient features: (1) it distinguishes data states that are mutable during iterative computations from those that are read-only across all iterations, in order to maximize sequential access and minimize random access; (2) it uses a two-level graph partitioning algorithm that enables balanced workloads and locality-optimized data placement; (3) it proposes a suite of locality-based optimizations that improve computational efficiency. Extensive experiments on several real-world graphs show that GraphMap outperforms existing distributed memory-based systems for various iterative graph algorithms.
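The abstract's first point, separating mutable per-vertex state from read-only graph topology, is what enables the sequential-access pattern. Below is a minimal single-machine sketch of that idea (not GraphMap's actual implementation): the read-only edge list lives in a flat binary file on secondary storage and is streamed sequentially each iteration, while only the small mutable rank vector stays in memory. The file layout, record format, and the PageRank example are all illustrative assumptions.

```python
import struct
from pathlib import Path

# Hypothetical on-disk layout: the read-only edge list is a flat binary
# file of (src, dst) uint32 pairs, sorted by src so every iteration reads
# it purely sequentially. Only the small mutable vertex state (the rank
# vector) lives in memory and changes across iterations.

EDGE_REC = struct.Struct("<II")  # one edge record: (src, dst) as little-endian uint32

def write_edges(path: Path, edges: list[tuple[int, int]]) -> None:
    with path.open("wb") as f:
        for src, dst in sorted(edges):          # sort by src for sequential access
            f.write(EDGE_REC.pack(src, dst))

def pagerank(path: Path, num_vertices: int, iters: int = 10, d: float = 0.85):
    ranks = [1.0 / num_vertices] * num_vertices      # mutable state, in memory
    out_deg = [0] * num_vertices
    with path.open("rb") as f:                       # one sequential pass for out-degrees
        while chunk := f.read(EDGE_REC.size):
            src, _ = EDGE_REC.unpack(chunk)
            out_deg[src] += 1
    for _ in range(iters):
        nxt = [(1.0 - d) / num_vertices] * num_vertices
        with path.open("rb") as f:                   # read-only edges, streamed in order
            while chunk := f.read(EDGE_REC.size):
                src, dst = EDGE_REC.unpack(chunk)
                nxt[dst] += d * ranks[src] / out_deg[src]
        ranks = nxt                                  # swap in the new mutable state
    return ranks

if __name__ == "__main__":
    path = Path("edges.bin")
    write_edges(path, [(0, 1), (0, 2), (1, 2), (2, 0)])
    print(pagerank(path, num_vertices=3, iters=20))
```

In this layout the mutable state grows with the number of vertices while the (usually much larger) edge data is only ever read front to back, which is the access pattern the abstract credits for GraphMap's performance on secondary storage.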
ISBN:
(Print) 9789819756711; 9789819756728
Recent advances in Natural Language Processing (NLP) through pre-trained language models (PLMs) have significantly improved a wide range of computational tasks. However, applying PLMs to graph-structured data, particularly to capture detailed structural information, remains challenging. Traditional approaches that integrate node sub-graph information with transformer architectures have shown promise, but they suffer from computational inefficiency, and their extensive fine-tuning requirements can compromise model adaptability, limiting knowledge transfer and the ability to handle natural language and graph data simultaneously. This paper introduces the Graph Transformer Adapter (GTA), a novel method that combines the strengths of PLMs with graph-structured data to refine graph node representations. GTA uses an adapter mechanism that leaves the original PLM parameters unchanged, improving training efficiency and reducing computational demands while preserving the integrity of the original model. Extensive experiments across various datasets demonstrate GTA's superior ability to handle graph-structured data, showcasing its potential to leverage NLP advances for improving graph node representations.
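The core mechanism described above, a small trainable adapter over a frozen PLM, can be sketched in PyTorch as follows. This is illustrative, not GTA's published architecture: the bottleneck adapter shape, the residual fusion of neighborhood features via `struct_proj`, and the `ToyPLM` stub are all assumptions made for the example; only the frozen-PLM-plus-trainable-adapter pattern comes from the abstract.

```python
import torch
import torch.nn as nn

class ToyPLM(nn.Module):
    """Stand-in for a real pre-trained text encoder (mean-pooled embeddings)."""
    def __init__(self, vocab: int, hidden: int):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.emb(token_ids).mean(dim=1)       # [batch, hidden]

class BottleneckAdapter(nn.Module):
    """Small trainable bottleneck with a residual connection (illustrative)."""
    def __init__(self, hidden: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))

class AdapterNodeEncoder(nn.Module):
    """Frozen PLM plus trainable adapter; only the adapter parameters update."""
    def __init__(self, plm: nn.Module, hidden: int):
        super().__init__()
        self.plm = plm
        for p in self.plm.parameters():
            p.requires_grad = False                  # keep the PLM frozen
        self.text_adapter = BottleneckAdapter(hidden)
        self.struct_proj = nn.Linear(hidden, hidden) # projects neighborhood features

    def forward(self, token_ids, neighbor_feats):
        with torch.no_grad():                        # frozen text encoding
            h = self.plm(token_ids)
        h = self.text_adapter(h)                     # trainable refinement
        return h + self.struct_proj(neighbor_feats)  # fuse sub-graph signal

if __name__ == "__main__":
    enc = AdapterNodeEncoder(ToyPLM(vocab=1000, hidden=32), hidden=32)
    tokens = torch.randint(0, 1000, (4, 16))         # 4 nodes, 16 tokens each
    neigh = torch.randn(4, 32)                       # precomputed neighborhood features
    print(enc(tokens, neigh).shape)                  # torch.Size([4, 32])
    trainable = sum(p.numel() for p in enc.parameters() if p.requires_grad)
    print("trainable params:", trainable)
```

Because the PLM's weights never receive gradients, the optimizer only updates the adapter and projection layers, which is what gives adapter methods their training-efficiency and knowledge-preservation advantages over full fine-tuning.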