Computing resource sharing provides an effective solution to the imbalance between the distribution of computing resources and the demand for computing power in the network, as well as to the waste of computing resources. In th...
ISBN:
(Print) 9798350364613; 9798350364606
In this paper, we share our experience in teaching parallel algorithms with the binary-forking model. With hardware advances, multicore computers are now ubiquitous. This has created a substantial demand in both research and industry to harness the capabilities of parallel computing. It is thus important to incorporate parallelism in computer science education, especially in the early stages of the curriculum. However, it is commonly believed that understanding and using parallelism requires a deep understanding of computer systems and architecture, which complicates introducing parallelism to young students and non-experts. We propose to use the binary-forking model, introduced in our previous research, in teaching parallel algorithms. This model is meant to capture the performance of algorithms on modern multicore shared-memory machines; it is a simple abstraction that isolates algorithm design ideas from system-level details. The abstraction allows for simple analysis based on the work-span model in theory, and can be directly implemented as parallel programs in practice. In this paper, we briefly overview some basic primitives in this model and provide a list of algorithms that we believe are well suited to parallel algorithms courses.
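To make the model's basic fork-join primitive concrete, the following sketch (not taken from the paper; the function name, grain size, and use of Python threads are illustrative choices) computes a parallel sum in the binary-forking style, with O(n) work and O(log n) span. Python threads are used here only to show the fork/join structure; CPython's global interpreter lock means a real implementation would rely on a native fork-join runtime for actual speedup.

    import threading

    def parallel_sum(a, lo, hi, grain=10_000):
        # Binary forking: fork the left half into a new task, compute the
        # right half in the current task, then join.  Work O(n), span O(log n).
        if hi - lo <= grain:                        # base case: sequential sum
            return sum(a[lo:hi])
        mid = (lo + hi) // 2
        left_result = {}
        def left():
            left_result['v'] = parallel_sum(a, lo, mid, grain)
        t = threading.Thread(target=left)           # fork
        t.start()
        right = parallel_sum(a, mid, hi, grain)     # continue in current task
        t.join()                                    # join
        return left_result['v'] + right

    print(parallel_sum(list(range(1_000_000)), 0, 1_000_000))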
ISBN:
(Print) 9798350363074; 9798350363081
We provide a comprehensive and updated assessment of Docker versus Docker in Docker (DinD), evaluating its impact on CPU, memory, disk, and network. Using different workloads, we evaluate DinD's performance across distinct hardware platforms and GNU/Linux distributions on cloud Infrastructure as a Service (IaaS) platforms like Google Compute Engine (GCE) and traditional server-based environments. We developed an automated tool suite to achieve our goal. We execute four well-known benchmarks on Docker and its nested-container variant. Our findings indicate that nested containers require up to 7 seconds to start, while standard Docker containers require less than 0.5 seconds for both the Debian and Alpine operating systems. Our results suggest that Docker containers based on Debian consistently outperform their Alpine counterparts, showing lower CPU latency. A key distinction among these Docker images lies in the varying number of installed libraries (ranging from 13 to 119) across different Linux distributions for the same system (e.g., MySQL). Furthermore, the number of events and the CPU latency indicate that the overhead DinD adds over Docker is insignificant for both operating systems. In terms of memory, containers running Debian-based images consume 20% more memory than those based on Alpine. No significant differences were observed between nested containers and standard Docker containers for disk and network I/O. It is worth emphasizing that some of the disparities, such as the larger memory footprint, appear to be a direct result of the software stack in use, including different kernel versions, libraries, and other essential packages.
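As a rough illustration of the startup-time measurement described above (this is not the paper's automated tool suite; the image names and the use of a trivial `true` command as the workload are assumptions), one could time a full `docker run` of a pre-pulled image as follows; for DinD, the runner prefix would instead point at the nested Docker daemon.

    import subprocess, time

    def container_run_seconds(image, runner=("docker", "run", "--rm")):
        # Time a full `docker run` of a trivial command; with the image pulled
        # in advance, this approximates container startup overhead.
        start = time.perf_counter()
        subprocess.run([*runner, image, "true"], check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return time.perf_counter() - start

    for image in ("debian:stable-slim", "alpine:latest"):
        times = [container_run_seconds(image) for _ in range(5)]
        print(image, "best of 5: %.3f s" % min(times))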
Enterprises are the principal actors in new power system research and development, and a scientific evaluation of enterprise contributions can help stimulate enterprises' innovation vitality and enthusiasm and ensure the...
Autism Spectrum Disorder (ASD) presents substantial challenges in cognitive, language, and social skills. Early detection is critical for effective intervention, spurring research into Artificial Intelligence (AI) appro...
Arbitrary pattern formation (Apf) is a well-studied problem in swarm robotics. To the best of our knowledge, the problem has been considered in two different settings: one in a Euclidean plane and another in an infini...
Authors:
Liu, Kangkang; Chen, Ningjiang
Guangxi Univ, Coll Comp & Elect Informat, Nanning, Peoples R China
Guangxi Univ, Educ Dept Guangxi Zhuang Autonomous Reg Key Lab Parallel Distributed & Intelligent Comp, Nanning, Peoples R China
ISBN:
(Print) 9798350349184; 9798350349191
With the increasing performance of deep convolutional neural networks, they have been widely used in many computer vision tasks. However, a large convolutional neural network model requires substantial memory and computing resources, which makes it difficult to meet the low-latency and reliability requirements of edge computing when the model is deployed locally on resource-limited devices in edge environments. Quantization is a model compression technique that can effectively reduce model size, computation cost, and inference latency, but quantization noise decreases the accuracy of the quantized model. To address the accuracy loss caused by model quantization, this paper proposes a post-training quantization method based on scale optimization. By reducing the influence of redundant model parameters on the quantization parameters during quantization, the scale factor is optimized to reduce the quantization error, thereby improving the accuracy of the quantized model, reducing inference latency, and improving the reliability of edge applications. Experimental results show that, under different quantization strategies and bit widths, the proposed method improves the accuracy of the quantized model, with the best quantized model gaining 1.36% in absolute accuracy. The improvement is substantial and facilitates the deployment of deep neural networks in edge environments.
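To make the role of the scale factor concrete, here is a minimal sketch of symmetric int8 post-training quantization with a naive mean-squared-error scale search (a generic illustration, not the paper's scale-optimization algorithm; the outlier setup and search range are assumptions). Shrinking the clipping range below max|w| keeps a few large, redundant weights from dictating the scale, which is the general effect the method above targets.

    import numpy as np

    def quantize_int8(w, scale):
        # Symmetric per-tensor quantization to int8 with a given scale factor.
        return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    def search_scale(w, steps=200):
        # Try scales below max|w|/127 and keep the one with the lowest
        # mean-squared quantization error.
        max_abs = float(np.abs(w).max())
        best_s, best_err = max_abs / 127.0, np.inf
        for frac in np.linspace(0.3, 1.0, steps):
            s = frac * max_abs / 127.0
            err = float(np.mean((dequantize(quantize_int8(w, s), s) - w) ** 2))
            if err < best_err:
                best_s, best_err = s, err
        return best_s

    rng = np.random.default_rng(0)
    w = rng.standard_normal(4096).astype(np.float32)
    w[:4] *= 20.0                                   # a few outlier weights
    for name, s in (("max-abs scale", float(np.abs(w).max()) / 127.0),
                    ("searched scale", search_scale(w))):
        mse = float(np.mean((dequantize(quantize_int8(w, s), s) - w) ** 2))
        print(f"{name}: scale={s:.5f}  mse={mse:.6f}")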
With the growing installation of distributed PV at the low-voltage level and the resulting concerns about maintaining good power quality, power utilities see a greater need for energy management programs, especially for flexible d...
As the size of modern datasets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing. Given that optimization is one of the pil...
DG (Distributed Generation) planning is a key technology in the construction of intelligent distribution networks. The large-scale integration of DGs into the distribution network will cause changes in its power flow ...