ISBN:
(Print) 9798350383461; 9798350383454
Graph Convolutional Networks (GCNs) are widely used across many domains. However, distributed full-batch training of GCNs on large-scale graphs is challenging due to high communication overhead. This work presents a hybrid pre-post-aggregation approach and an integer quantization method to reduce communication costs. With these techniques, we develop SuperGNN, a scalable distributed GCN training framework for the ABCI supercomputer. Experimental results on multiple large graph datasets show that our method achieves a speedup of up to 6x over state-of-the-art implementations, without sacrificing model accuracy.
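The abstract's integer quantization idea can be illustrated with a generic per-tensor int8 scheme for the node features exchanged between graph partitions. This is a minimal sketch of the general technique, not SuperGNN's actual quantization method, which the abstract does not specify.

```python
import numpy as np

def quantize_int8(x):
    # Per-tensor symmetric quantization: map [-max|x|, max|x|] to [-127, 127].
    scale = float(np.abs(x).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Recover an approximation of the original float32 features.
    return q.astype(np.float32) * scale

# Node features that would be communicated between partitions.
feats = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int8(feats)
recovered = dequantize_int8(q, s)
# The int8 payload is 4x smaller than float32, with rounding error
# bounded by half the quantization step.
assert np.abs(recovered - feats).max() <= s / 2 + 1e-6
```

Shipping `q` plus a single scalar scale instead of the raw float32 tensor is the source of the communication savings.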
Edge computing has transformed machine learning by using computing closer to the data sources, thereby reducing latency. The ever-increasing volume of data has necessitated forming clusters of edge devices, possibly w...
In this paper, we propose a distributed and interactive Plug-in Electric Vehicle (PEV) charging scheduling approach, which is also combined with an optimal pricing strategy. This method tackles challenges such as fluc...
ISBN:
(Print) 9798331539580
Complex interactions within microservice architectures obscure how individual services contribute to high-level requirements. This becomes even more serious in multi-tenant and multi-vendor scenarios, such as Edge computing, where different stakeholders may specify opposing Service Level Objectives (SLOs), e.g., minimizing both energy consumption and response time. To avoid contradictions among SLOs and to infer how SLOs can be fulfilled, this paper presents a methodology that diffuses high-level SLOs into multiple lower levels of SLOs and parameter assignments. This makes it clear how individual sub-processes contribute to high-level SLOs and how they must be configured to foster their fulfillment. We evaluated our methodology on several microservice pipelines, where the challenge is to ensure multiple high-level SLOs (e.g., customer satisfaction) by finding and constraining all influential factors. The results show that by inferring multiple layers of lower-level constraints, we can fulfill high-level SLOs up to 100%. Notably, we found that the restrictiveness of low-level SLOs and the occurrence of conflicts have a severe impact on SLO fulfillment.
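One simple way to picture "diffusing" a high-level SLO into lower-level ones is to split a latency budget across a pipeline's services. The proportional-split rule and the service names below are hypothetical illustrations, not the paper's methodology.

```python
# Hypothetical sketch: split a high-level latency budget into per-service
# SLOs, proportionally to each service's measured baseline latency.
def diffuse_slo(total_budget_ms, baseline_ms):
    total = sum(baseline_ms.values())
    return {svc: total_budget_ms * t / total for svc, t in baseline_ms.items()}

# Hypothetical pipeline: a 200 ms end-to-end SLO diffused over three stages.
sub_slos = diffuse_slo(200.0, {"gateway": 10.0, "inference": 70.0, "storage": 20.0})
# The per-service budgets sum back to the high-level SLO, so meeting every
# lower-level constraint implies meeting the high-level one.
```

A real diffusion would also assign parameter values (e.g., batch sizes, replica counts) and detect conflicts between stakeholders' SLOs, which this toy split ignores.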
Container technology has become a cornerstone of cloud computing, offering notable benefits such as enhanced resource utilization and streamlined deployment processes. The adoption of container technology by leading c...
Microservices architecture is a promising approach for developing reusable scientific workflow capabilities for integrating diverse resources, such as experimental and observational instruments and advanced computatio...
ISBN:
(Print) 9781665495127
The MAC protocol IEEE 802.15.4 DSME provides features for WSNs to support demanding requirements such as high reliability and adaptability to dynamic traffic. This work introduces the concept of a virtual sink, which comprises the sink and its 1-hop neighbors (a.k.a. satellites), as the core of a strategy to alleviate the burden caused by the funneling effect in data-collection scenarios. Our strategy enables the coexistence of a centralized scheduling algorithm at the virtual sink and a decentralized scheduling algorithm for the remaining nodes of the network. Through a simulative assessment, we compare the performance of the virtual-sink-based strategy with the status quo of DSME via the decentralized slot scheduler TPS. Results show an improvement in network throughput of up to 38% and a reduction in energy consumption of about 30% at the satellites.
ISBN:
(Digital) 9781665471770
ISBN:
(Print) 9781665471770
We consider a scenario that uses roadside units (RSUs) as distributed caches in connected vehicular networks. The caches aim to rapidly provide contents to connected vehicles under various traffic conditions. Due to the rapidly changing road environment and user mobility, the concept of age-of-information (AoI) is used for (1) updating the cached information and (2) maintaining its freshness. Frequent updates of cached information maintain its freshness at the expense of network resources: they increase the number of data transmissions between the RSUs and the MBS, and thus increase system costs. A tradeoff therefore exists between the AoI of cached information and the system costs. Based on this observation, the algorithm proposed in this paper aims to reduce the system cost fundamentally required for content delivery while minimizing content AoI, based on a Markov Decision Process (MDP) and Lyapunov optimization.
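The AoI-versus-cost tradeoff can be made concrete with a toy drift-plus-penalty style rule: refresh the cache only when staleness outweighs the weighted transmission cost. The threshold rule, the cost value, and the weight `V` below are illustrative assumptions, not the paper's actual MDP policy.

```python
def should_update(aoi, update_cost, V):
    # Illustrative greedy rule in the spirit of Lyapunov drift-plus-penalty:
    # refresh when the age "drift" exceeds the V-weighted update cost.
    return aoi > V * update_cost

def simulate(horizon=50, update_cost=5.0, V=1.0):
    # One cached content item; age grows by one per slot, an update resets it.
    aoi, total_cost, aoi_sum = 0, 0.0, 0
    for _ in range(horizon):
        aoi += 1
        if should_update(aoi, update_cost, V):
            total_cost += update_cost  # one RSU-MBS transmission
            aoi = 0
        aoi_sum += aoi
    return aoi_sum / horizon, total_cost
```

Sweeping `V` exposes the tradeoff: a larger `V` tolerates staler caches but pays for fewer transmissions, while a small `V` keeps content fresh at higher system cost.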
ISBN:
(Print) 9783031820724; 9783031820731
The results of the stepwise computations are saved in memory slots for possible reuse. The extended calculus reduces chains of unnecessary assignments, which merely copy values of terms from one memory slot to another, without any essential algorithmic changes. The primary applications of the chain-free type theory of recursion are in computational semantics of formal and natural languages, including programming languages and compilers.
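The general idea of collapsing chains of copy assignments can be sketched as classic copy propagation: every slot that was filled only by copying is resolved back to the original slot. This is a generic illustration in plain Python, not the paper's typed calculus.

```python
# Illustrative copy-propagation sketch: given an ordered list of pure copy
# assignments (target, source), resolve each target to the original slot
# so the intermediate copies become unnecessary.
def collapse_copies(assignments):
    root = {}
    for tgt, src in assignments:
        # If the source was itself a copy, point straight at its root.
        root[tgt] = root.get(src, src)
    return root

# The chain b := a; c := b; d := c collapses so b, c, d all read slot "a".
r = collapse_copies([("b", "a"), ("c", "b"), ("d", "c")])
```

After collapsing, the intermediate assignments can be dropped without changing what any later term evaluates to, mirroring the calculus's goal of eliminating assignment chains without essential algorithmic changes.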
The exponential growth of distributed cloud systems necessitates an intelligent form of workload management that ensures optimum performance, scalability, and reliability. Traditional approaches based on static or heu...