ISBN (Digital): 9781665459778
ISBN (Print): 9781665459778
The Satellite-Terrestrial Hybrid Network (STHN) has become a focus of future communication architectures aiming to accommodate more services and applications. Combining the STHN with Edge Computing (EC) can provide lower delay, faster transfer speeds, and better information transmission. Meanwhile, a microservice-based architecture greatly increases system flexibility, which suits the STHN. In this paper, to verify the efficiency of an STHN-with-EC system, we develop a platform based on KubeEdge, an open-source EC architecture. The platform not only helps terrestrial operators deploy and manage microservices on satellites but also increases the flexibility of satellite service provision. In addition, on this platform we design a microservice scheduling algorithm called Optimal Microservice Scheduling with Adaptive Link Changes (OMS-ALC) to minimize communication delay. Experimental results show that OMS-ALC outperforms other solutions in end-to-end delay.
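As a rough illustration of the kind of decision OMS-ALC makes, the sketch below greedily places a microservice on the satellite edge node with the lowest estimated end-to-end delay under the current link state; the node records, delay model, and greedy rule are assumptions for illustration and not the paper's actual algorithm.

# Hypothetical delay-aware placement sketch in the spirit of OMS-ALC.
def estimate_delay(node, request_size_bits):
    """End-to-end delay = propagation delay + transmission time on the current link."""
    return node["prop_delay_s"] + request_size_bits / node["link_rate_bps"]

def place_microservice(nodes, request_size_bits):
    """Pick the satellite edge node with the lowest estimated delay.
    Re-invoking this on every link-state update crudely mimics the
    'adaptive link changes' behaviour."""
    return min(nodes, key=lambda n: estimate_delay(n, request_size_bits))

if __name__ == "__main__":
    nodes = [
        {"name": "sat-1", "prop_delay_s": 0.012, "link_rate_bps": 50e6},
        {"name": "sat-2", "prop_delay_s": 0.004, "link_rate_bps": 20e6},
    ]
    best = place_microservice(nodes, request_size_bits=8e6)
    print("schedule onto", best["name"])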
ISBN (Digital): 9781665453462
ISBN (Print): 9781665453462
This paper aims at developing a tight schedulability analysis for real-time global gang scheduling, in which the threads of each task subject to timing requirements are assigned to multiple processors in parallel (i.e., following the rigid gang task model). Focusing on the RTA (Response Time Analysis) framework, known to exhibit high schedulability performance for other task models, we address the following two issues: i) how to generalize the existing RTA framework to gang scheduling and reuse existing RTA components developed for other task models within the generalized framework, and ii) how to incorporate important characteristics of gang scheduling into the RTA framework in a systematic way so as to minimize the framework's pessimism in judging schedulability. By addressing these issues, our RTA framework makes it possible to derive tight schedulability analyses for EDF, FP, and potentially more scheduling algorithms under real-time global gang scheduling. Our simulation results also demonstrate that the proposed RTA framework outperforms or complements existing studies on real-time global and non-global gang scheduling in terms of schedulability performance.
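For context, the sketch below shows the generic fixed-point iteration that RTA frameworks instantiate: the response time R is iterated as R <- C + I(R) until it converges or exceeds the deadline. The placeholder interference bound is an assumption and does not reflect the paper's gang-aware or EDF/FP-specific bounds.

# Generic RTA fixed-point iteration; the interference bound is a placeholder.
import math

def response_time(wcet, deadline, interference_bound):
    """Iterate R <- C + I(R) until a fixed point or a deadline miss.
    interference_bound(R) upper-bounds the interference suffered in any
    window of length R (the component the paper refines for gang tasks)."""
    r = wcet
    while True:
        nxt = wcet + interference_bound(r)
        if nxt > deadline:
            return None            # deemed unschedulable
        if nxt == r:
            return r               # fixed point reached
        r = nxt

# Toy usage: WCET-2 task interfered by a period-5, WCET-1 higher-priority task.
print(response_time(2, 10, lambda r: math.ceil(r / 5) * 1))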
ISBN (Print): 9798400703850
Uncovering bugs in concurrent programs is a challenging problem owing to the exponentially large search space of thread interleavings. Past approaches to concurrency testing are either optimistic, relying on random sampling of these interleavings, or pessimistic, relying on systematic exploration of a reduced (bounded) search space. In this work, we suggest a fresh, pragmatic solution that focuses neither only on formal, systematic testing nor solely on unguided sampling or stress-testing approaches. We employ a biased random search that guides exploration towards neighborhoods likely to expose new behavior. As such, it is thematically similar to greybox fuzz testing, which has proven to be an effective technique for finding bugs in sequential programs. To identify new behaviors in the domain of interleavings, we prune and navigate the search space using the "reads-from" relation. Our approach is significantly more efficient at finding bugs per schedule exercised than other state-of-the-art concurrency testing tools and approaches. Experiments on widely used concurrency datasets also show that our greybox-fuzzing-inspired approach gives a strict improvement over a randomized baseline scheduling algorithm in practice via a more uniform exploration of the schedule space. We make our concurrency testing infrastructure "Reads-From Fuzzer" (RFF) available for experimentation and usage by the wider community to aid future research.
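The following sketch illustrates the greybox-fuzzing loop over thread schedules implied by the abstract: schedules that yield a previously unseen reads-from signature are kept in a corpus and mutated further. The functions run_with_schedule and reads_from_signature are hypothetical placeholders for RFF's real instrumentation, not part of any published API.

# Hypothetical greybox-fuzzing loop over thread schedules.
import random

def fuzz_schedules(run_with_schedule, reads_from_signature, seed_schedule, budget=1000):
    corpus = [seed_schedule]
    seen = set()
    for _ in range(budget):
        parent = random.choice(corpus)
        child = mutate(parent)
        trace = run_with_schedule(child)      # execute under this interleaving
        sig = reads_from_signature(trace)     # which write each read observed
        if sig not in seen:                   # new behaviour -> keep the schedule
            seen.add(sig)
            corpus.append(child)
    return corpus

def mutate(schedule):
    """Swap two adjacent scheduling decisions to perturb the interleaving."""
    s = list(schedule)
    if len(s) > 1:
        i = random.randrange(len(s) - 1)
        s[i], s[i + 1] = s[i + 1], s[i]
    return s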
ISBN (Print): 9798350377712; 9798350377705
This study addresses the challenge of deploying robotic software with Quality of Service (QoS) constraints in Edge-Cloud computing clusters. The paper introduces HEFT4K, an event-driven scheduling method tailored for Kubernetes-managed systems and based on the Heterogeneous Earliest Finish Time (HEFT) algorithm. The algorithm reduces software execution time (makespan) and facilitates re-mapping in the case of node failures, involving only the essential containers needed to maintain uninterrupted robot functionality. Experiments conducted on a real-world robot and on synthetic benchmarks show a 75% speedup in makespan compared to the standard Kubernetes scheduler, enhancing the efficiency of QoS-focused scheduling for robotic applications in distributed systems.
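For reference, here is a compact sketch of the classic HEFT heuristic that HEFT4K builds on (upward-rank prioritisation followed by an earliest-finish-time processor choice); HEFT4K's Kubernetes integration, event-driven re-mapping, and QoS handling are not represented.

# Minimal classic HEFT: rank tasks by upward rank, map each to the processor
# giving the earliest finish time.
def heft(tasks, succ, cost, comm, n_procs):
    """tasks: task ids; succ[t]: successors of t; cost[t][p]: execution time of
    t on processor p; comm[(t, s)]: transfer time on edge (t, s) if t and s run
    on different processors."""
    avg = {t: sum(cost[t]) / n_procs for t in tasks}

    rank = {}
    def upward_rank(t):                   # length of critical path to an exit task
        if t not in rank:
            rank[t] = avg[t] + max(
                (comm[(t, s)] + upward_rank(s) for s in succ[t]), default=0.0)
        return rank[t]

    proc_ready = [0.0] * n_procs          # when each processor becomes free
    finish, placed = {}, {}
    for t in sorted(tasks, key=upward_rank, reverse=True):
        preds = [u for u in tasks if t in succ[u]]
        best = None
        for p in range(n_procs):
            data_ready = max((finish[u] + (0 if placed[u] == p else comm[(u, t)])
                              for u in preds), default=0.0)
            eft = max(proc_ready[p], data_ready) + cost[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placed[t] = best
        proc_ready[placed[t]] = finish[t]
    return placed, max(finish.values())   # mapping and makespan

if __name__ == "__main__":
    tasks = ["a", "b", "c"]
    succ = {"a": ["b", "c"], "b": [], "c": []}
    cost = {"a": [2, 3], "b": [4, 2], "c": [3, 3]}
    comm = {("a", "b"): 1, ("a", "c"): 1}
    print(heft(tasks, succ, cost, comm, n_procs=2))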
ISBN (Print): 9781665484855
Resource-sharing constraints can be imposed by limiting the maximum allowable number of components for each type of functional unit during scheduling. However, the sharing of functional units is typically not considered explicitly in the scheduling procedure. In this paper, we propose a SAT-based scheduling algorithm for high-level synthesis that takes the resource-sharing problem into account. Several pruning strategies are proposed to reduce the search space.
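To make the idea concrete, the toy encoding below schedules three multiplications over three cycles with a single shared multiplier as a SAT instance; it only conveys the flavour of such a formulation, assumes the python-sat (pysat) package as the solver backend, and does not reproduce the paper's encoding or pruning strategies.

# Toy SAT encoding of a resource-constrained scheduling instance.
from itertools import combinations
from pysat.solvers import Glucose3

N_OPS, N_CYCLES = 3, 3                     # three multiplications, horizon of 3 cycles
PRECEDENCE = [(0, 2)]                      # op 0 must finish before op 2 starts
var = lambda op, c: op * N_CYCLES + c + 1  # Boolean var: "op is scheduled in cycle c"

clauses = []
for op in range(N_OPS):
    clauses.append([var(op, c) for c in range(N_CYCLES)])       # at least one cycle
    for a, b in combinations(range(N_CYCLES), 2):                # at most one cycle
        clauses.append([-var(op, a), -var(op, b)])
for c in range(N_CYCLES):                                        # one shared multiplier:
    for a, b in combinations(range(N_OPS), 2):                   # at most one mult op per cycle
        clauses.append([-var(a, c), -var(b, c)])
for i, j in PRECEDENCE:                                          # j strictly after i
    for c in range(N_CYCLES):
        clauses.append([-var(j, c)] + [var(i, e) for e in range(c)])

with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        model = set(solver.get_model())
        schedule = {op: c for op in range(N_OPS) for c in range(N_CYCLES)
                    if var(op, c) in model}
        print(schedule)                                          # e.g. {0: 0, 1: 1, 2: 2}
    else:
        print("infeasible under the resource bound")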
ISBN (Print): 9781665481069
Calculation of many-body correlation functions is one of the critical kernels used in many scientific computing areas, especially in Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph consisting of vertices describing quarks inside a hadron node and edges designating quark propagations at specific time intervals. Due to its computation- and memory-intensive nature, real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD prefer to leverage multiple GPUs. Different from general graph processing, many-body correlation function calculations show two specific features: a large number of computation- and data-intensive kernels, and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions; the latter offers data-reuse opportunities to mitigate the data-intensive nature of many-body correlation function calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features, resulting in sub-optimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, which accelerates contractions for correlation functions by explicitly taking the data dimension (e.g., data reuse and data eviction) into account. The work first performs a comprehensive study of the interplay between data reuse and load balance, and designs two new concepts, the local reuse pattern and the reuse bound, to study the opportunity of achieving an optimal trade-off between them. Based on this study, MICCO proposes a heuristic scheduling algorithm and a machine-learning-based regression model to generate the optimal setting of reuse bounds. MICCO is integrated into a real-world Lattice QCD system, Redstar, running for the first time on multiple GPUs.
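A hypothetical sketch of a reuse-aware assignment heuristic in the spirit of MICCO's reuse/load-balance trade-off is shown below; the reuse-bound guard and cost model are illustrative assumptions, and the real scheduler and its regression-based tuning of reuse bounds are not shown.

# Hypothetical reuse-aware multi-GPU assignment of contraction terms.
def assign_terms(terms, n_gpus, reuse_bound):
    """terms: list of (cost, set_of_tensor_ids). A term prefers the GPU that
    already holds most of its tensors, unless that GPU is ahead of the least
    loaded one by more than reuse_bound units of work (load-balance guard)."""
    load = [0.0] * n_gpus
    resident = [set() for _ in range(n_gpus)]
    placement = []
    for cost, tensors in terms:
        # candidate with the most tensors already resident (maximise reuse)
        reuse_gpu = max(range(n_gpus), key=lambda g: len(tensors & resident[g]))
        # fall back to the least loaded GPU if reuse would skew the load too much
        light_gpu = min(range(n_gpus), key=lambda g: load[g])
        gpu = reuse_gpu if load[reuse_gpu] - load[light_gpu] <= reuse_bound else light_gpu
        load[gpu] += cost
        resident[gpu] |= tensors
        placement.append(gpu)
    return placement

if __name__ == "__main__":
    terms = [(4.0, {"prop_t0", "prop_t1"}),
             (4.0, {"prop_t1", "prop_t2"}),
             (4.0, {"prop_t0", "prop_t3"})]
    print(assign_terms(terms, n_gpus=2, reuse_bound=6.0))   # [0, 0, 1]

In this example the third term is diverted to the idle GPU because keeping it with its resident tensors would exceed the assumed reuse bound.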
ISBN (Print): 9783031683220; 9783031683237
Over recent years, reinforcement learning has become a prominent method for optimizing sequential decision-making problems. One group of sequential decision-making problems that has benefited significantly from reinforcement-learning-based optimization techniques is scheduling problems. However, most existing reinforcement learning work on scheduling optimization aims at optimizing a single, makespan-based objective. While the makespan (the overall time from the start of the first task to the end of the last task) is indeed important in some settings, other settings benefit more from the optimization of other types of objectives. In this work, we focus on tardiness-based objectives and present a new reward scheme that aims at simultaneously optimizing multiple notions of tardiness.
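As one concrete possibility for such a reward scheme, the sketch below combines total and maximum tardiness into a negative per-step reward; this formulation is an assumption for illustration, not the reward scheme proposed in the paper.

# Illustrative tardiness-based per-step reward.
def tardiness(completion_time, due_date):
    """Tardiness of a single job: how late it finished, never negative."""
    return max(0.0, completion_time - due_date)

def step_reward(finished_jobs):
    """Negative sum of total and maximum tardiness of the jobs completed this
    step, so the agent is rewarded for keeping jobs on or before their due dates."""
    total = sum(tardiness(c, d) for c, d in finished_jobs)
    worst = max((tardiness(c, d) for c, d in finished_jobs), default=0.0)
    return -(total + worst)

# Example: jobs finish at t=12 and t=7 with due dates 10 and 9.
print(step_reward([(12, 10), (7, 9)]))   # total=2, worst=2 -> reward -4.0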
ISBN (Digital): 9781665459778
ISBN (Print): 9781665459778
With the rapid development of intelligent communication systems, efficient data transmission is critical to fast edge learning in multi-user multiple-input multiple-output (MIMO) systems, since data acquisition from massive numbers of edge devices has become a bottleneck. To cope with the mismatch between the empirical probability distribution of the transmitted data and the expected one, this paper first proposes to quantify data importance using the Kullback-Leibler divergence. We then design a multi-user scheduling criterion that combines channel state information with data importance indicators, followed by an iterative multi-user scheduling algorithm. Finally, experimental results demonstrate that the proposed multi-user scheduling strategy significantly improves the learning efficiency and test accuracy of edge learning systems.
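To make the two ingredients concrete, the sketch below scores each device's local label distribution by its Kullback-Leibler divergence from the expected distribution and combines it with the channel gain in a weighted score; the weight alpha, the normalisation, and the top-k selection are illustrative assumptions rather than the paper's exact criterion or iterative algorithm.

# KL-divergence data importance plus a simple channel/importance scheduling score.
import numpy as np

def kl_importance(empirical, expected, eps=1e-12):
    """D_KL(empirical || expected) over label frequencies: the further a device's
    local data drifts from the expected distribution, the larger its score."""
    p = np.asarray(empirical, dtype=float) + eps
    q = np.asarray(expected, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def schedule_users(channel_gains, importances, k, alpha=0.5):
    """Pick k devices by a weighted sum of normalised channel quality and data
    importance (alpha trades the two off)."""
    g = np.asarray(channel_gains) / max(channel_gains)
    s = np.asarray(importances) / max(importances)
    score = alpha * g + (1 - alpha) * s
    return np.argsort(score)[::-1][:k].tolist()

# Example: 3 devices, uniform expected distribution over 4 classes.
expected = [0.25] * 4
imps = [kl_importance(e, expected) for e in ([10, 0, 0, 0], [3, 3, 2, 2], [5, 5, 0, 0])]
print(schedule_users([0.9, 0.4, 0.7], imps, k=2))   # -> [0, 2]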
Energy management is emerging as an important issue for high-performance computing (HPC) owing to high operational costs and low reliability. Compared with low-power architectural approaches, energy-aware scheduling based on Dynamic Voltage Scaling (DVS) and Dynamic Power Management (DPM) is regarded as a promising direction since it is practical and low-cost. At present, most studies focus on pure DVS or non-DVS environments, while most high-performance computing systems are hybrid non-DVS/DVS platforms. We propose an energy-aware scheduling algorithm for parallel applications that considers both the DVS and non-DVS characteristics of hybrid systems. We present the task-assignment rule, analyze the DVS and DPM techniques, and give their mathematical formulations, which maintain both makespan optimization and energy conservation. The clustering and merging algorithm and the priority computation method take resource constraints into account. Extensive simulations demonstrate that the proposed algorithm has a stronger ability to save energy and optimize time than the Heterogeneous Earliest Finish Time (HEFT), Energy-Efficient Task Duplication Scheduling (EETDS), and Heterogeneous Energy-Aware Duplication Scheduling (HEADUS) algorithms, for both synthetic and realistic workloads.
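For readers unfamiliar with the two mechanisms, the sketch below gives a textbook-style model of DVS energy/runtime scaling and of the DPM decision for an idle gap; the constants and formulas are illustrative assumptions, not the paper's mathematical formulation.

# Illustrative DVS and DPM energy models.
def dvs_energy(cycles, freq, volt, c_eff=1e-9):
    """Dynamic energy under DVS: per-cycle energy ~ C_eff * V^2, so
    E = C_eff * V^2 * cycles and runtime = cycles / f. Lowering V (with f)
    saves energy at the cost of a longer schedule."""
    return c_eff * volt ** 2 * cycles, cycles / freq

def dpm_idle_energy(idle_time, p_idle, p_sleep, t_transition, e_transition):
    """DPM choice for an idle gap on a non-DVS processor: stay idle, or pay the
    sleep/wake transition overhead and sleep for the remainder, whichever costs
    less energy (sleeping is only possible if the gap covers the transition)."""
    stay = p_idle * idle_time
    if idle_time <= t_transition:
        return stay
    sleep = e_transition + p_sleep * (idle_time - t_transition)
    return min(stay, sleep)

# Running the same 1e9-cycle task at full speed vs. half frequency / 0.8x voltage:
print(dvs_energy(1e9, 1e9, 1.0))      # ~1.0 J in 1 s
print(dvs_energy(1e9, 0.5e9, 0.8))    # ~0.64 J in 2 s
print(dpm_idle_energy(5.0, p_idle=2.0, p_sleep=0.1, t_transition=1.0, e_transition=3.0))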
Jobshop scheduling is a classic problem in the field of production scheduling. Solving and optimizing jobshop scheduling can greatly reduce workshop production costs and improve processing efficiency, thereby improving the market competitiveness of manufacturing enterprises. To make decisions in the complex dynamic scheduling process more accurately and to simplify the solution process, the jobshop scheduling problem can be transformed into a reinforcement learning problem based on a Markov decision process. The performance of the adaptive scheduling algorithm in a dynamic manufacturing environment is improved based on the Deep Q Network (DQN). In the proposed scheduling algorithm, five state features with continuous value ranges are designed as input to a Deep Neural Network (DNN), and ten well-known heuristic dispatching rules are selected as the action set of the DQN. A target network and a prediction network are used to train the parameters. An action selection strategy based on the softmax function is designed for the DQN: it selects the dispatching rule with the largest action value as the executed action, thereby mitigating the problem that a suboptimal action value exceeds the optimal action's Q value in the early learning stage; furthermore, non-optimal actions are selected with greater probability in the later learning stage. Ten benchmark jobshop test instances known as "LA" are used as simulation objects and run in a simulation environment implemented in Python. The simulation results confirm that the proposed scheduling algorithm based on DQN has better performance and universality than a single dispatching rule or the traditional Q-learning algorithm.
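As an illustration of the softmax action-selection step described above, the sketch below samples a dispatching rule with probability proportional to exp(Q/tau); the rule names listed and the temperature value are placeholders, and the DNN producing the Q-values is omitted.

# Softmax selection over dispatching-rule Q-values (placeholder rule subset).
import numpy as np

DISPATCH_RULES = ["SPT", "LPT", "FIFO", "EDD", "MWKR"]   # illustrative subset of the ten rules

def softmax_select(q_values, tau=1.0, rng=None):
    """Sample a dispatching-rule index with probability softmax(Q / tau)."""
    rng = rng or np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()                          # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(q_values), p=probs)

# Example: Q-values predicted by the DNN for the current state features.
q = [0.2, 1.5, 0.3, 0.9, 0.1]
print("dispatch with", DISPATCH_RULES[softmax_select(q, tau=0.5)])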