In this study, the optimal allocation of distributed energy resources (DERs) and shunt capacitors (SCs) in the distribution network (DN) is presented. Furthermore, the effect of existing voltage regulator (VR) device ...
Optimal resource placement and service placement are key factors for resource utilization and service availability in mobile edge computing (MEC) systems. However, efficiently utilizing the resources of MEC servers fo...
The OODA (Observe, Orient, Decide, Act) loop is a widely used decision-making model in various domains, including military operations, business strategy and cybersecurity. With the emergence of intelligent technologie...
In the area of Mobile Edge Computing (MEC), how to offload applications with QoS and fairness guarantees has been attracting increasing attention. Most existing application offloading strategies focus on reducing the ...
High-performance computing communities are increasingly adopting Neural Networks (NN) as surrogate models in their applications to generate scientific insights. Replacing an execution phase in the application with NN ...
ISBN: (Print) 9798400701559
High-performance computing communities are increasingly adopting Neural Networks (NN) as surrogate models in their applications to generate scientific insights. Replacing an execution phase in the application with NN models can bring significant performance improvements. However, there is a lack of tools that help domain scientists automatically apply NN-based surrogate models to HPC applications. We introduce a framework, named Auto-HPCnet, to democratize the usage of NN-based surrogates. Auto-HPCnet is the first end-to-end framework that makes past proposals for NN-based surrogate models practical and disciplined. Auto-HPCnet introduces a workflow to address unique challenges when applying the approximation, such as feature acquisition and meeting the application-specific constraint on the quality of the final computation outcome. We show that Auto-HPCnet can leverage NNs for a set of HPC applications and achieves a 5.50x speedup on average (up to 16.8x, with data preparation cost included) while meeting the application-specific constraint on the final computation quality.
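As a rough illustration of the surrogate idea described in this abstract (not Auto-HPCnet's actual code), the sketch below samples an expensive execution phase, fits a small neural network to those samples, and uses the surrogate only while a hypothetical application-specific error bound holds; the phase, feature layout, and threshold are all assumptions made for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_phase(x):
    # Stand-in for the costly execution phase being approximated.
    return np.sin(x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(2000, 2))          # feature acquisition step
y_train = expensive_phase(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
surrogate.fit(X_train, y_train)

def run_phase(x, max_abs_err=1e-2):
    # Use the surrogate only if a cheap spot-check meets the quality constraint;
    # otherwise fall back to the exact computation.
    sample = x[:16]
    err = np.max(np.abs(surrogate.predict(sample) - expensive_phase(sample)))
    return surrogate.predict(x) if err <= max_abs_err else expensive_phase(x)

X_run = rng.uniform(-2, 2, size=(10000, 2))
print(run_phase(X_run).shape)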
The Transformer model, which has significantly advanced natural language processing and computer vision, overcomes the limitations of recurrent neural networks and convolutional neural networks. However, it faces chal...
Serverless computing has been favored by users and infrastructure providers from various industries, including online services and scientific computing. Users enjoy its auto-scaling and ease-of-management, and provide...
ISBN: (Print) 9798400701559
Serverless computing has been favored by users and infrastructure providers from various industries, including online services and scientific computing. Users enjoy its auto-scaling and ease of management, and providers gain more control to optimize their service. However, existing serverless platforms still require users to pre-define resource allocations for their functions, leading to frequent misconfiguration by inexperienced users in practice. Moreover, functions' varying input data further widen the gap between their dynamic resource demands and static allocations, leaving functions either over-provisioned or under-provisioned. This paper presents Libra, a safe and timely resource harvesting framework for multi-node serverless clusters. By proactively profiling dynamic resource demands and availability, Libra makes precise harvesting decisions that accelerate function invocations with harvested resources while jointly improving resource utilization. Experiments on OpenWhisk clusters with real-world workloads show that Libra reduces response latency by 39% and achieves 3x the resource utilization of state-of-the-art solutions.
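The gap the abstract describes between static allocations and dynamic demands can be pictured with a toy harvesting policy; this is not Libra's algorithm, and the FunctionProfile fields, safety_margin, and numbers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class FunctionProfile:
    name: str
    allocated_mb: int   # static, user-defined allocation
    demand_mb: int      # profiled demand for the current input

def harvestable(f, safety_margin=0.15):
    # Memory that can be taken from f without risking its own invocation.
    return max(f.allocated_mb - int(f.demand_mb * (1 + safety_margin)), 0)

def plan_harvest(functions):
    # Grant the harvested surplus to the most under-provisioned functions first.
    pool = sum(harvestable(f) for f in functions)
    grants = {}
    for f in sorted(functions, key=lambda f: f.demand_mb - f.allocated_mb, reverse=True):
        need = max(f.demand_mb - f.allocated_mb, 0)
        give = min(need, pool)
        if give > 0:
            grants[f.name] = give
            pool -= give
    return grants

profiles = [
    FunctionProfile("thumbnail", allocated_mb=512, demand_mb=180),        # over-provisioned
    FunctionProfile("video-transcode", allocated_mb=256, demand_mb=640),  # under-provisioned
]
print(plan_harvest(profiles))   # {'video-transcode': 305} with these toy numbers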
In the era of big data, efficiently processing and retrieving insights from unstructured data presents a critical challenge. This paper introduces a scalable leader-worker distributed data pipeline designed to handle ...
Edge computing has emerged as a stable and efficient solution for IoT data processing and analytics. With distributed big data engines being deployed on edge infrastructures, users seek solutions to evaluate the performan...
ISBN: (Print) 9783031396977; 9783031396984
Edge computing has emerged as a stable and efficient solution for IoT data processing and analytics. With distributed big data engines being deployed on edge infrastructures, users seek solutions to evaluate the performance of their analytics queries. In this paper, we introduce SparkEdgeEmu, an interactive framework designed for researchers and practitioners who need to inspect the performance of Spark analytics jobs without the burden of setting up an edge topology. SparkEdgeEmu provides: (i) parameterizable template-based use cases for edge infrastructures, (ii) real-time emulated environments serving ready-to-use Spark clusters, (iii) a unified and interactive programming interface for the framework's execution and query submission, and (iv) utilization metrics from the underlying emulated topology as well as performance and quantitative metrics from the deployed queries. We evaluate the usability of our framework in a smart city use case and extract useful performance hints for Apache Spark code execution.
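The kind of analytics query such an emulated Spark cluster would receive can be sketched in plain PySpark; this is not SparkEdgeEmu's own interface, and the master URL, data path, and column names below are placeholders for whatever the emulated topology exposes.

import time
from pyspark.sql import SparkSession, functions as F

# Placeholder endpoint for the Spark master exposed by the emulated topology.
spark = (SparkSession.builder
         .master("spark://emulated-edge-master:7077")
         .appName("smart-city-query")
         .getOrCreate())

start = time.perf_counter()

# Example smart-city style query: average PM2.5 reading per district.
readings = spark.read.json("/data/sensor_readings.json")   # placeholder path
result = (readings
          .groupBy("district")
          .agg(F.avg("pm25").alias("avg_pm25"))
          .orderBy("district"))
result.show()

# Simple client-side latency metric to compare across emulated topologies.
print(f"query latency: {time.perf_counter() - start:.2f}s")
spark.stop()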
Recent deep learning relies on large-scale training of Deep Neural Networks (DNNs), which can be time-consuming and computationally intensive. To improve DNN training efficiency, GPU clusters have been used to perform...