ISBN:
(Print) 9798350391961; 9798350391954
With the rapid development of information technology, users have come to expect more from the audio-visual experience and functionality of conference spaces. The traditional single-function conference room configuration can no longer meet the diverse needs of modern work and interactive activities. How to achieve efficient management and control, seamless interconnection, and resource sharing of audio systems across spaces, while preserving the acoustic characteristics and flexibility of each independent space, has become a core issue in building a multi-hall, multi-functional conference room cluster. Based on the audio system project of the Academic Center of the Communication University of China, this paper proposes a conference room cluster audio system built on a distributed architecture. The solution overcomes the limitations of traditional systems in scalability, collaborative work, and resource scheduling, and enables lossless transmission, real-time processing, and adaptive configuration of audio signals. It thus ensures a consistent, high-quality audio experience across the entire cluster and provides practical guidance for the design and optimization of future conference room audio systems.
ISBN:
(Print) 9798400706486
The proceedings contain 5 papers. The topics discussed include: parallel and distributed frugal tracking of a quantile;defining the boundaries for endpoint congestion management in networks for high-performance computing;accelerating application bulk synchronous writes in HPC environments;flying base station channel capacity;and eGossip: optimizing resource utilization in gossip-based clusters through eBPF.
This study is dedicated to the integration of big data analytics with edge computing, a critical need driven by the exponential growth of Internet of Things (IoT) technologies and smart device data. We introduce an op...
ISBN:
(Print) 9783031226977
The proceedings contain 12 papers. The special focus in this conference is on Job Scheduling Strategies for Parallel Processing. The topics include: Optimization of Execution Parameters of Moldable Ultrasound Workflows Under Incomplete Performance Data; Scheduling of Elastic Message Passing Applications on HPC Systems; Preface; On the Feasibility of Simulation-Driven Portfolio Scheduling for Cyberinfrastructure Runtime Systems; Improving Accuracy of Walltime Estimates in PBS Professional Using Soft Walltimes; Re-making the Movie-Making Machine; Using Kubernetes in Academic Environment: Problems and Approaches; AI-Job Scheduling on Systems with Renewable Power Sources; Toward Building a Digital Twin of Job Scheduling and Power Management on an HPC System; Encoding for Reinforcement Learning Driven Scheduling.
ISBN:
(Print) 9798350350128
Data read and write tail latency in distributed storage systems affects the quality of service of applications. In this paper, we focus on requests whose latency falls around the 99.99th-percentile tail and design a critical window. By analyzing the distribution of target storage devices for requests in the critical window, we design a simple but effective cache space allocation method to optimize tail latency. Unlike traditional methods, it schedules target cache space allocation instead of requests. Since it does not change the processing of requests and I/Os, it avoids the extra time consumption incurred by request-scheduling algorithms. At the same time, it addresses the lag, tail-latency fluctuation, and high resource consumption of load-balancing-based tail latency guarantee algorithms that rely on request scheduling. Finally, we verify the method's effectiveness in optimizing tail-latency metrics.
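The abstract does not include code, but the core idea (identify the requests in a window around the 99.99th-percentile latency, then skew cache space toward the storage devices those requests hit) can be sketched roughly as follows. All names and the proportional allocation rule are illustrative assumptions, not the paper's actual method:

```python
import heapq

def critical_window(latencies_us, window=0.0001):
    """Return indices of requests whose latency lies in the top `window`
    fraction (by default the 99.99th-percentile tail) -- the 'critical window'."""
    n = len(latencies_us)
    k = max(1, int(n * window))  # number of tail requests to inspect
    # indices of the k largest latencies
    return [i for _, i in heapq.nlargest(
        k, ((lat, i) for i, lat in enumerate(latencies_us)))]

def allocate_cache(requests, latencies_us, total_cache_blocks):
    """Allocate cache blocks to storage devices in proportion to how often
    each device appears as the target of a critical-window request."""
    tail = critical_window(latencies_us)
    counts = {}
    for i in tail:
        dev = requests[i]["device"]
        counts[dev] = counts.get(dev, 0) + 1
    total = sum(counts.values())
    return {dev: total_cache_blocks * c // total for dev, c in counts.items()}
```

Because only the cache allocation changes, the request and I/O paths stay untouched, which matches the abstract's claim of avoiding per-request scheduling overhead.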
ISBN:
(Print) 9798400708435
Co-locating Latency-Critical (LC) and Best-Effort (BE) services in edge-clouds is expected to enhance resource utilization. However, this mixed deployment encounters unique challenges. Edge-clouds are heterogeneous, distributed, and resource-constrained, leading to intense competition for edge resources and making it difficult to balance fluctuating co-located workloads. Previous work on cloud datacenters no longer applies, since it does not consider the unique nature of edges. Although a few works explicitly provide schemes for edge workload co-location, these solutions fail to address the major challenges simultaneously. In this paper, we propose Tango, a harmonious management and scheduling framework for Kubernetes-based edge-cloud systems with mixed services, to address these challenges. Tango incorporates novel components and mechanisms for elastic resource allocation, together with two traffic scheduling algorithms that effectively manage distributed edge resources. Tango demonstrates harmony not only in the compatible mixed services it supports but also in collaborative solutions that complement each other. Built on a backward-compatible design for Kubernetes, Tango extends Kubernetes with automatic scaling and traffic scheduling capabilities. Experiments on large-scale hybrid edge-clouds, driven by real workload traces, show that Tango improves system resource utilization by 36.9%, QoS-guarantee satisfaction rate by 11.3%, and throughput by 47.6% compared with state-of-the-art approaches.
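The general LC/BE co-location pattern the abstract describes can be illustrated with a minimal control step: throttle the best-effort quota when the latency-critical service violates its SLO, and grow it when there is headroom. This is a generic sketch with hypothetical names and thresholds, not Tango's actual mechanism or API:

```python
def adjust_be_quota(lc_latency_ms, lc_slo_ms, be_quota, step=0.1,
                    min_quota=0.05, max_quota=0.9):
    """One control step for co-located services: when the latency-critical
    (LC) service violates its SLO, back off the best-effort (BE) CPU quota;
    when there is ample headroom, let BE expand to reclaim idle resources."""
    if lc_latency_ms > lc_slo_ms:        # QoS violation: throttle BE
        return max(min_quota, be_quota - step)
    if lc_latency_ms < 0.8 * lc_slo_ms:  # headroom: grow BE
        return min(max_quota, be_quota + step)
    return be_quota                      # within band: hold steady
```

In a Kubernetes setting, such a step would typically run in a controller loop that rewrites the BE pods' resource limits; the 0.8 headroom threshold here is an arbitrary illustrative choice.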
ISBN:
(Print) 9783031488023
The proceedings contain 40 papers. The special focus in this conference is on Parallel and Distributed Computing. The topics include: Towards Resource-Efficient DNN Deployment for Traffic Object Recognition: From Edge to Fog; The Implementation of Battery Charging Strategy for IoT Nodes; SubMFL: Compatible SubModel Generation for Federated Learning in Device Heterogeneous Environment; Towards a Simulation as a Service Platform for the Cloud-to-Things Continuum; Cormas: The Software for Participatory Modelling and Its Application for Managing Natural Resources in Senegal; Malleable APGAS Programs and Their Support in Batch Job Schedulers; Task-Level Checkpointing for Nested Fork-Join Programs Using Work Stealing; Making Uintah Performance Portable for Department of Energy Exascale Testbeds; Benchmarking the Parallel 1D Heat Equation Solver in Chapel, Charm++, C++, HPX, Go, Julia, Python, Rust, Swift, and Java; Parallel Auto-scheduling of Counting Queries in Machine Learning Applications on HPC Systems; Energy Efficiency Impact of Processing in Memory: A Comprehensive Review of Workloads on the UPMEM Architecture; Enhancing Supercomputer Performance with Malleable Job Scheduling Strategies; A Performance Modelling-Driven Approach to Hardware Resource Scaling; Adaptive HPC Input/Output Systems; Dynamic Allocations in a Hierarchical Parallel Context; Designing a Sustainable Serverless Graph Processing Tool on the Computing Continuum; Diorthotis: A Parallel Batch Evaluator for Programming Assignments; Experiences and Lessons Learned from PHYSICS: A Framework for Cloud Development with FaaS; Improved IoT Application Placement in Fog Computing Through Postponement; High-Performance Distributed Computing with Smartphones; Blockchain-Based Decentralized Authority for Complex Organizational Structures Management; Transparent Remote OpenMP Offloading Based on MPI; DAPHNE Runtime: Harnessing Parallelism for Integrated Data Analysis Pipelines; Exploring Factors Impacting Data Offloading Performance in
ISBN:
(Print) 9783031506833
The proceedings contain 40 papers. The special focus in this conference is on Parallel and Distributed Computing. The topics include: Towards Resource-Efficient DNN Deployment for Traffic Object Recognition: From Edge to Fog; The Implementation of Battery Charging Strategy for IoT Nodes; SubMFL: Compatible SubModel Generation for Federated Learning in Device Heterogeneous Environment; Towards a Simulation as a Service Platform for the Cloud-to-Things Continuum; Cormas: The Software for Participatory Modelling and Its Application for Managing Natural Resources in Senegal; Malleable APGAS Programs and Their Support in Batch Job Schedulers; Task-Level Checkpointing for Nested Fork-Join Programs Using Work Stealing; Making Uintah Performance Portable for Department of Energy Exascale Testbeds; Benchmarking the Parallel 1D Heat Equation Solver in Chapel, Charm++, C++, HPX, Go, Julia, Python, Rust, Swift, and Java; Parallel Auto-scheduling of Counting Queries in Machine Learning Applications on HPC Systems; Energy Efficiency Impact of Processing in Memory: A Comprehensive Review of Workloads on the UPMEM Architecture; Enhancing Supercomputer Performance with Malleable Job Scheduling Strategies; A Performance Modelling-Driven Approach to Hardware Resource Scaling; Adaptive HPC Input/Output Systems; Dynamic Allocations in a Hierarchical Parallel Context; Designing a Sustainable Serverless Graph Processing Tool on the Computing Continuum; Diorthotis: A Parallel Batch Evaluator for Programming Assignments; Experiences and Lessons Learned from PHYSICS: A Framework for Cloud Development with FaaS; Improved IoT Application Placement in Fog Computing Through Postponement; High-Performance Distributed Computing with Smartphones; Blockchain-Based Decentralized Authority for Complex Organizational Structures Management; Transparent Remote OpenMP Offloading Based on MPI; DAPHNE Runtime: Harnessing Parallelism for Integrated Data Analysis Pipelines; Exploring Factors Impacting Data Offloading Performance in
ISBN:
(Print) 9783031226977; 9783031226984
Elastic parallel applications, which can change their number of processors during execution, promise improved application and system performance, enable new classes of data- and event-driven, highly dynamic parallel applications, and offer the possibility of predictive, proactive fault tolerance via shrinkage in increasingly large and complex HPC systems, where the mean time between component failures is decreasing. Several challenges remain for elastic applications to become mainstream: 1) a clear understanding of programming models for elastic applications, 2) adequate support from message passing libraries, middleware, and resource management systems (RMS), and 3) thorough investigation of scheduling algorithms. Scheduling elastic jobs requires communication between running jobs and the RMS, keeping track of pending jobs, and prioritizing jobs to expand or shrink at a given point in time. These challenges make finding an optimal schedule difficult. We propose three scheduling algorithms for elastic applications, along with six candidate selection policies to prioritize shrinkable applications, and investigate their impact on system and application performance. We also study how workload characteristics and algorithm choice affect performance. Our simulation results indicate that both workload characteristics and the range of elasticity (flexibility) of elastic applications affect system and application performance.
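The notion of a candidate selection policy for shrinkable jobs can be sketched concretely. The two policies below ("most-flexible" and "youngest-first") and all names are hypothetical illustrations of the general idea, not the paper's six actual policies:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int        # nodes currently held
    min_nodes: int    # smallest size the job can shrink to
    runtime: float    # elapsed runtime in seconds

def select_shrink_candidates(running, needed, policy="most-flexible"):
    """Pick running elastic jobs to shrink until `needed` nodes are freed.
    'most-flexible' shrinks jobs with the most reclaimable nodes first;
    'youngest-first' shrinks the most recently started jobs first.
    Returns {job_name: nodes_to_release}."""
    if policy == "most-flexible":
        order = sorted(running, key=lambda j: j.nodes - j.min_nodes,
                       reverse=True)
    else:  # youngest-first
        order = sorted(running, key=lambda j: j.runtime)
    plan, freed = {}, 0
    for job in order:
        if freed >= needed:
            break
        give = min(job.nodes - job.min_nodes, needed - freed)
        if give > 0:
            plan[job.name] = give
            freed += give
    return plan
```

An RMS implementing this would then notify the selected jobs to shrink and hand the freed nodes to pending work; the choice of policy changes which running jobs pay the cost.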
ISBN:
(Print) 9783031226977; 9783031226984
Execution of heterogeneous workflows on high-performance computing (HPC) platforms presents unprecedented resource management and execution coordination challenges for runtime systems. Task heterogeneity increases the complexity of resource and execution management, limiting the scalability and efficiency of workflow execution. Resource partitioning and distribution of task execution over the partitioned resources promise to address those problems, but we lack an experimental evaluation of their performance at scale. This paper provides a performance evaluation of the Process Management Interface for Exascale (PMIx) and its reference implementation PRRTE on the leadership-class HPC platform Summit, when integrated into a pilot-based runtime system called RADICAL-Pilot. We partition resources across multiple PRRTE Distributed Virtual Machine (DVM) environments, each responsible for launching tasks via the PMIx interface. We experimentally measure workload execution performance in terms of task scheduling/launching rate, the distribution of DVM task placement times, and DVM startup and termination overheads on Summit. The integrated PMIx/PRRTE solution enables the use of an abstracted, standardized set of interfaces for orchestrating the launch process, along with dynamic process management and monitoring capabilities. It extends scaling capabilities, overcoming a limitation of other launching mechanisms (e.g., JSM/LSF). The explored DVM setup configurations provide insights into DVM performance and a layout for leveraging it. Our experimental results show that a heterogeneous workload of 65,500 tasks on 2,048 nodes, partitioned across 32 DVMs, runs steadily with resource utilization no lower than 52%. With fewer concurrently executing tasks, resource utilization reaches up to 85%, based on a heterogeneous workload of 8,200 tasks on 256 nodes with 2 DVMs.
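The underlying partitioning idea (split a large heterogeneous workload across independent launcher instances so no single launcher becomes a bottleneck) can be illustrated with a simple greedy balancer. This is a generic sketch, not RADICAL-Pilot's or PRRTE's actual placement logic, and the function names are made up:

```python
import heapq

def partition_tasks(tasks, num_dvms):
    """Greedy largest-first partitioning of tasks across DVM-like launcher
    partitions: each task (given as an (id, node_count) pair) goes to the
    currently least-loaded partition, keeping per-launcher load balanced.
    Returns one list of task ids per partition."""
    heap = [(0, i) for i in range(num_dvms)]  # (accumulated load, partition)
    heapq.heapify(heap)
    parts = [[] for _ in range(num_dvms)]
    for task_id, nodes in sorted(tasks, key=lambda t: -t[1]):
        load, i = heapq.heappop(heap)
        parts[i].append(task_id)
        heapq.heappush(heap, (load + nodes, i))
    return parts
```

With the paper's workload of 65,500 tasks over 32 DVMs, such a scheme would hand each DVM roughly 2,000 launches, which is the kind of per-launcher load reduction the partitioned setup is meant to provide.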