ISBN (print): 9798400706141
BGP data collection platforms as currently architected face fundamental challenges that threaten their long-term sustainability. Inspired by recent work, we analyze, prototype, and evaluate a new optimization paradigm for BGP collection. Our system scales data collection with two components: analyzing redundancy between BGP updates and using it to optimize sampling of the incoming streams of BGP data. An appropriate definition of redundancy across updates depends on the analysis objective. Our contributions include: a survey, measurements, and simulations to demonstrate the limitations of current systems; a general framework and algorithms to assess and remove redundancy in BGP observations; and quantitative analysis of the benefit of our approach in terms of accuracy and coverage for several canonical BGP routing analyses such as hijack detection and topology mapping. Finally, we implement and deploy a new BGP peering collection system that automates peering expansion using our redundancy analytics, which provides a path forward for more thorough evaluation of this approach.
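The abstract does not define its redundancy metric, so the following is only a minimal sketch of one plausible reading: for an origin-level analysis such as hijack detection, two peers are redundant if they expose nearly the same set of (prefix, origin AS) pairs, and near-duplicate peers can be dropped before sampling. The function names, the Jaccard threshold, and the toy data are illustrative assumptions, not the paper's algorithm.

```python
def origin_view(updates):
    """Summarize a peer's update stream as the set of (prefix, origin AS) pairs it reveals."""
    return {(u["prefix"], u["as_path"][-1]) for u in updates}

def jaccard(a, b):
    """Jaccard similarity of two observation sets (1.0 = fully redundant)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def select_peers(streams, threshold=0.9):
    """Greedily keep a peer only if its view is not a near-duplicate of an already-kept peer's view."""
    kept = {}
    for peer, updates in streams.items():
        view = origin_view(updates)
        if all(jaccard(view, v) < threshold for v in kept.values()):
            kept[peer] = view
    return list(kept)

# Toy example: two peers see identical origins, a third sees a different origin (possible hijack).
streams = {
    "peer_a": [{"prefix": "203.0.113.0/24", "as_path": [64500, 64501]}],
    "peer_b": [{"prefix": "203.0.113.0/24", "as_path": [64502, 64501]}],
    "peer_c": [{"prefix": "203.0.113.0/24", "as_path": [64503, 64666]}],
}
print(select_peers(streams))  # ['peer_a', 'peer_c'] -- peer_b adds nothing at the origin level
```

Under a different analysis objective, such as AS-level topology mapping, the per-peer view would have to summarize full AS paths rather than origins, which is exactly why the abstract stresses that an appropriate definition of redundancy depends on the analysis objective.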
Continuous performance monitoring is critical for maintaining optimal performance of high-performance computing resources. This is especially important for technological test bed systems, in which software updates occ...
Intel Data Center GPU Max 1550, known as Ponte Vecchio (PVC), is a new Intel GPU architecture for high-performance computing. It is the basis of two systems on the June 2024 TOP500 list, Dawn (#51) and Aurora (#2). Th...
The prediction of the resource consumption for the distributed training of deep learning models is of paramount importance, as it can inform users a priori how long their training would take and also enable them to manage the cost of training. Yet, no such prediction is available to users because the resource consumption itself varies significantly according to "settings" such as GPU types and also by "workloads" like deep learning models. Previous studies have aimed to derive or model such a prediction, but they fall short of accommodating the various combinations of settings and workloads together. This study presents Driple, which designs graph neural networks to predict the resource consumption of diverse workloads. Driple also designs transfer learning to extend the graph neural networks to adapt to differences in settings. The evaluation results show that Driple can effectively predict a wide range of workloads and settings. At the same time, Driple can efficiently reduce the time required to tailor the prediction for different settings by up to 7.3x.
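The abstract does not describe Driple's architecture beyond naming graph neural networks and transfer learning, so the sketch below only illustrates the general shape of the idea: a training workload represented as an operator graph, one round of mean-aggregation message passing, and a graph-level readout that regresses a resource metric. All features, weights, and names here are hypothetical and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_predict(adj, feats, w_msg, w_out):
    """One round of mean-aggregation message passing, then a global readout regressor.

    adj   : (n, n) adjacency matrix of the workload's operator graph
    feats : (n, d) per-operator features (e.g. relative FLOPs, tensor size)
    """
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    h = np.tanh((adj @ feats) / deg @ w_msg)   # aggregate neighbor features, then transform
    g = h.mean(axis=0)                         # graph-level readout
    return float(g @ w_out)                    # predicted resource metric (e.g. step time)

# Toy 3-operator chain (conv -> relu -> fc) with invented features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[2.0, 1.0],    # [relative FLOPs, relative memory]
                  [0.1, 0.2],
                  [1.5, 0.8]])
w_msg = rng.normal(size=(2, 4))
w_out = rng.normal(size=4)
print(gnn_predict(adj, feats, w_msg, w_out))
```

In this framing, adapting to a new setting such as a different GPU type could mean re-fitting only part of the model, though the abstract does not say how Driple's transfer learning is actually structured.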
ISBN (print): 9798400700828
The relocation of computation from the network core to the edge, where data is primarily generated, has gained momentum, leading to the emergence of edge computing as a viable solution for low-latency processing. As a result, edge computing has the potential to significantly reduce response times, decrease bandwidth usage, enhance energy efficiency, and offer various other benefits. At the same time, end-user devices do not offer a consistent computing platform, and Internet middleboxes severely restrict communication with edge devices. Often, this is circumvented by publicly accessible relay servers, which add latency and render time-critical tasks unviable for offloading. This paper presents an approach capable of addressing the complexities inherent in edge computing and facilitating sound decisions about latency-aware computation offloading. We conducted several real-world experiments to evaluate our approach and provide valuable data for further research. Our findings show that edge offloading is competitive with cloud and grid offloading, as it effectively reduces latency. Empirical evidence from our research supports that edge computing can offer significant advantages for real-time applications.
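The paper's decision procedure is not detailed in this abstract; the snippet below is only a generic sketch of how a latency-aware offloading choice can be framed, comparing local execution time against estimated transfer-plus-RTT-plus-compute time on candidate targets. All classes, names, and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    rtt_ms: float        # measured network round-trip time to the target
    compute_ms: float    # estimated execution time of the task on the target

def offload_latency(target, upload_kb, bandwidth_kbps):
    """Estimated end-to-end latency: transfer time plus RTT plus remote compute time."""
    transfer_ms = upload_kb / bandwidth_kbps * 1000.0
    return transfer_ms + target.rtt_ms + target.compute_ms

def choose_target(local_ms, targets, upload_kb, bandwidth_kbps):
    """Offload only if some target beats local execution; otherwise run locally."""
    best = min(targets, key=lambda t: offload_latency(t, upload_kb, bandwidth_kbps))
    best_ms = offload_latency(best, upload_kb, bandwidth_kbps)
    return (best.name, best_ms) if best_ms < local_ms else ("local", local_ms)

# Hypothetical numbers: a nearby edge node vs. a distant cloud relay.
targets = [Target("edge", rtt_ms=8, compute_ms=40), Target("cloud", rtt_ms=60, compute_ms=25)]
print(choose_target(local_ms=120, targets=targets, upload_kb=500, bandwidth_kbps=10_000))
```

A publicly accessible relay server of the kind the abstract mentions would show up in this framing as added RTT on the cloud path, which is what can make a nearby edge target the better choice for time-critical tasks.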
Systems involving artificial intelligence (AI) are protagonists in many everyday activities. Moreover, designers are increasingly implementing these systems for groups of users in various social and cooperative domain...
ISBN (print): 9798400700828
The proceedings contain 10 papers. The topics discussed include: S-Cache: function caching for serverless edge computing; an autonomous resource management model towards cloud morphing; latency-aware scheduling for real-time application support in edge computing; an evaluation of service mesh frameworks for edge systems; hot under the hood: an analysis of ambient temperature impact on heterogeneous edge platforms; lotus: serverless in-transit data processing for edge-based pub/sub; how to pipeline frame transfer and server inference in edge-assisted AR to optimize AR task accuracy?; cost-aware neural network splitting and dynamic rescheduling for edge intelligence; ESCEPE: early-exit network section-wise model compression using self-distillation and weight clustering; and an empirical study of resource-stressing faults in edge-computing applications.
ISBN (print): 9798400700347
The proceedings contain 37 papers. The topics discussed include: interfacial instability of liquid interphase improves molecular communication density; the thermal impact of THz signaling in protein nanonetworks; exploration of time reversal for wireless communications within computing packages; frequency analysis of a redox-based molecular-electrical communication channel; proteome fingerprinting as a localization scheme for nanobots; EIDA, a best effort equitable distributed ID assignment mechanism for heterogeneous dense nanonetworks; fine-tuned circuit representation of human vessels through reinforcement learning: a novel digital twin approach for hemodynamics; biophysical model for signal-embedded droplet soaking into 2D cell culture; frequency demodulation with magnetoelectric coreshells: a novel approach to enhanced bio-stimulation; and incoherent feedforward loop as a clock signal for synchronizing signals in biological systems.
Scientific research increasingly relies on distributed computational resources, storage systems, networks, and instruments, ranging from HPC and cloud systems to edge devices. Event-driven architecture (EDA) benefits ...
Scientific productivity can be enhanced through workflow management tools, relieving users of large high-performance computing (HPC) systems from the tedious tasks of scheduling and designing the complex computational exe...