ISBN:
(Print) 9783319579726; 9783319579719
A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation, or the composition of astronomical images. Workflow applications commonly require large computational resources; hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible means of executing them. Two important factors for executing workflows on distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies and time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes them into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained, and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of the scheduling algorithms.
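To make the taxonomy concrete, the sketch below shows a best-effort list-scheduling heuristic of the kind the first category covers (a HEFT-style earliest-finish-time rule); the task graph, runtimes, and VM names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a best-effort list-scheduling heuristic for workflows
# (HEFT-style). Task names, runtimes, and the two-VM setup are
# illustrative, not from the paper.

def list_schedule(tasks, deps, runtime, resources):
    """Greedily map each task (in dependency order) to the resource
    that gives it the earliest finish time."""
    finish = {}                            # task -> finish time
    free_at = {r: 0.0 for r in resources}  # resource -> next free time
    placement = {}
    for t in tasks:                        # tasks assumed topologically sorted
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        # pick the resource minimizing this task's finish time
        best = min(resources,
                   key=lambda r: max(ready, free_at[r]) + runtime[(t, r)])
        start = max(ready, free_at[best])
        finish[t] = start + runtime[(t, best)]
        free_at[best] = finish[t]
        placement[t] = (best, start, finish[t])
    return placement

# Tiny example: t1 -> {t2, t3} -> t4 on two heterogeneous VMs.
tasks = ["t1", "t2", "t3", "t4"]
deps = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
runtime = {(t, r): c for t, costs in
           {"t1": {"vm1": 2, "vm2": 3}, "t2": {"vm1": 4, "vm2": 2},
            "t3": {"vm1": 3, "vm2": 3}, "t4": {"vm1": 1, "vm2": 2}}.items()
           for r, c in costs.items()}
print(list_schedule(tasks, deps, runtime, ["vm1", "vm2"]))
```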
ISBN:
(Print) 9781467307758
In this paper, we study the problem of link scheduling for multi-hop wireless networks with per-flow delay constraints. Specifically, we are interested in algorithms that maximize the asymptotic decay-rate of the probability with which the maximum end-to-end backlog among all flows exceeds a threshold, as the threshold becomes large. We provide both positive and negative results in this direction. By minimizing the drift of the maximum end-to-end backlog in converge-cast on a tree, we design an algorithm, Largest-Weight-First (LWF), that achieves the optimal asymptotic decay-rate for the overflow probability of the maximum end-to-end backlog as the threshold becomes large. However, such a drift-minimizing algorithm may not exist for general networks; we provide an example in which no algorithm can minimize the drift of the maximum end-to-end backlog. Finally, we simulate the LWF algorithm in converge-cast networks, together with a well-known algorithm (the back-pressure algorithm) and an algorithm that is large-deviations optimal in terms of the sum-queue (the P-TREE algorithm). Our simulations show that our algorithm performs significantly better, not only in terms of the asymptotic decay-rate but also in terms of the actual overflow probability.
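As an illustration of the Largest-Weight-First idea, the following sketch greedily activates conflict-free links in decreasing order of backlog; the node-exclusive interference model and the use of local queue length as the weight are simplifying assumptions, not the paper's exact definitions.

```python
# Hedged sketch of an LWF-style link scheduler for converge-cast:
# each slot, build a conflict-free set of links by taking the most
# backlogged link first.

def lwf_schedule(queues, links, conflicts):
    """queues: link -> backlog; conflicts: link -> set of conflicting links."""
    chosen, blocked = [], set()
    for l in sorted(links, key=lambda l: queues[l], reverse=True):
        if queues[l] > 0 and l not in blocked:
            chosen.append(l)
            blocked |= conflicts[l] | {l}
    return chosen

# Two-hop chain a -> b -> root under node-exclusive interference:
# the two links share node b, so only one can be active per slot.
links = ["a->b", "b->root"]
conflicts = {"a->b": {"b->root"}, "b->root": {"a->b"}}
queues = {"a->b": 3, "b->root": 1}
for slot in range(4):
    for l in lwf_schedule(queues, links, conflicts):
        queues[l] -= 1
        if l == "a->b":
            queues["b->root"] += 1   # packet moves one hop toward the root
print(queues)
```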
Protection mechanisms are very important in networks when working paths fail. In this paper, we propose a scheduling mechanism with n-to-m protection (n is the number of primary LSPs (Label Switching Paths), ...
ISBN:
(Print) 9781467313568
In this paper we introduce Starting-Energy Fair Queuing (SEFQ), a novel class of energy-aware scheduling algorithms that aims to guarantee a target lifetime for mobile devices while providing tasks with proportional energy consumption and meeting their time constraints. By extending fair queuing to the energy domain, the CPU's energy consumption is managed in such a way that each task is guaranteed a share of the average power over a fixed period. The simulation results show that the performance of time-sensitive tasks can be traded off against their fairness.
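A minimal sketch of how fair queuing can carry over to the energy domain is given below, following the classic start-time fair queuing tag rule; the function names and the per-slot energy model are assumptions, not the paper's exact SEFQ formulation.

```python
import heapq

# Minimal sketch of start-time fair queuing applied to per-task energy
# shares (in the spirit of SEFQ). Tags advance by normalized energy
# consumed; the task with the smallest tag is served each slot.

def sefq_run(tasks, slot_energy, n_slots):
    """tasks: name -> (weight, remaining_energy). Each slot dispenses
    slot_energy joules to the task with the smallest start tag."""
    heap = [(0.0, name) for name in tasks]      # (start tag, task)
    heapq.heapify(heap)
    trace = []
    for _ in range(n_slots):
        if not heap:
            break
        tag, name = heapq.heappop(heap)
        weight, remaining = tasks[name]
        spent = min(slot_energy, remaining)
        trace.append((name, spent))
        remaining -= spent
        tasks[name] = (weight, remaining)
        if remaining > 0:
            # advance the tag by energy consumed over the task's weight
            heapq.heappush(heap, (tag + spent / weight, name))
    return trace

# Two tasks with a 2:1 energy share; the heavier-weighted task ends up
# receiving roughly twice the energy over time.
print(sefq_run({"video": (2.0, 10.0), "sync": (1.0, 10.0)}, 1.0, 6))
```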
ISBN:
(Print) 9781424405008
In this paper, we describe the problem of developing scheduling algorithms for an environment of parallel cluster tools, which is a special case of the unrelated parallel machines problem. We first describe the problem under consideration in detail and then present our scheduling environment and the idea of using slowdown factors to predict lot cycle times in order to evaluate schedules and parts of them. This article is primarily conceptual, containing basic ideas that illustrate facets of the problem and first solution approaches. Nonetheless, the authors see high potential in examining these questions, as little research has been done on this issue so far.
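The following sketch illustrates the slowdown-factor idea: a lot's predicted cycle time is its raw process time scaled by a tool-dependent factor, and a schedule is evaluated by summing the predictions. All names and factor values are illustrative assumptions, not calibrated data from the paper.

```python
# Hedged sketch of using slowdown (flow) factors to predict lot cycle
# times on parallel cluster tools and to compare candidate schedules.

def predicted_cycle_time(raw_process_time, slowdown_factor):
    """Cycle-time estimate = raw process time scaled by a tool- and
    load-dependent slowdown factor (>= 1)."""
    return raw_process_time * slowdown_factor

def evaluate_schedule(assignment, raw_time, slowdown):
    """Sum predicted cycle times of all lots under a lot -> tool mapping."""
    return sum(predicted_cycle_time(raw_time[(lot, tool)], slowdown[tool])
               for lot, tool in assignment.items())

# Two unrelated tools: tool B is faster per lot but more congested.
raw_time = {("lot1", "A"): 50, ("lot1", "B"): 40,
            ("lot2", "A"): 60, ("lot2", "B"): 45}
slowdown = {"A": 1.3, "B": 1.8}
for plan in [{"lot1": "A", "lot2": "B"}, {"lot1": "B", "lot2": "A"}]:
    print(plan, evaluate_schedule(plan, raw_time, slowdown))
```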
The paper presents a tool for the assessment of the capacity of railway networks. This tool includes a first definition level that can be considered microscopic, which permits train runs to be simulated numerically through the identification of the occupation of block sections; a second, mesoscopic level then uses aggregated data (e.g., the run times between stations or the minimum admitted headways), which can be calculated automatically by the micro-simulator or entered directly by the user, as input. A scheduling algorithm, using the aggregated data, produces feasible timetables, optimised according to given quality parameters. Since the procedure is automated and the computation phase is rather quick, it can be used to generate sets of feasible timetables and to perform timetable-based capacity assessments. Its implementation, within a perturbation analysis, is based on a discrete-event simulation core. This core applies a perturbation (delay, accident, anomaly) to a given timetable, which is re-arranged in order to solve any traffic conflict and to assess the robustness of the timetable design. The effectiveness of the rescheduling algorithms, which can be used to simulate different strategies that could be adopted by dispatchers, is also evaluated.
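A toy sketch of the perturbation step is shown below: a delay is injected into a timetable on one shared block section, and conflicting trains are pushed back to restore a minimum headway. The first-come-first-served rule and the headway value are assumptions for illustration, not the tool's actual rescheduling strategies.

```python
# Hedged sketch of perturbation analysis on a single block section:
# inject a delay, then re-arrange the timetable to solve conflicts.

MIN_HEADWAY = 3  # minutes between consecutive entries into the block

def reschedule(timetable, delayed_train, delay):
    """timetable: list of (train, entry_time) on one block section.
    Returns a conflict-free timetable after the perturbation."""
    times = {train: t for train, t in timetable}
    times[delayed_train] += delay
    # re-sort by (possibly perturbed) entry time, then enforce the headway
    order = sorted(times.items(), key=lambda x: x[1])
    fixed, last = [], None
    for train, t in order:
        if last is not None and t < last + MIN_HEADWAY:
            t = last + MIN_HEADWAY          # push back to solve the conflict
        fixed.append((train, t))
        last = t
    return fixed

# Delaying R1 by 5 minutes reorders it behind R2 and propagates to R3.
print(reschedule([("R1", 0), ("R2", 4), ("R3", 8)], "R1", 5))
```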
This paper considers the problem of designing scheduling algorithms for multi-channel (e.g., OFDM) wireless downlink networks with n users/OFDM sub-channels. For this system, while the classical MaxWeight algorithm is...
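For reference, a minimal sketch of the classical MaxWeight rule mentioned above: on each sub-channel, serve the user with the largest queue-length times channel-rate product. The user set and rate values here are illustrative assumptions.

```python
# Minimal sketch of the classical MaxWeight rule for a multi-channel
# (e.g., OFDM) downlink with per-user queues and per-channel rates.

def maxweight(queues, rates):
    """queues: user -> backlog; rates[user][ch]: achievable rate.
    Returns channel -> chosen user for this slot."""
    channels = range(len(next(iter(rates.values()))))
    return {ch: max(queues, key=lambda u: queues[u] * rates[u][ch])
            for ch in channels}

queues = {"u1": 5, "u2": 2, "u3": 7}
rates = {"u1": [1.0, 0.2], "u2": [0.5, 1.0], "u3": [0.1, 0.6]}
print(maxweight(queues, rates))  # {0: 'u1', 1: 'u3'}
```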
Scheduling algorithms are essential in guaranteeing Quality of Service (QoS) provisioning. Through QoS guarantees, a network application can obtain services that fulfil its specific requirements, such as lower delay, ...
State-of-the-art Solid State Disks (SSDs) and Non-Volatile Memory (NVM) systems have undergone significant technology shifts and architectural changes in the last couple of years, and, in parallel, SSD internal architecture has changed dramatically; modern SSDs now employ multiple internal resources, such as NVM chips and I/O buses, in an attempt to achieve high internal parallelism in processing I/O requests. In addition, to reduce intrinsic NVM system management overheads, SSD firmware employs advanced memory control strategies such as finer-granular address mapping algorithms and concurrency methods. As a result of complex interactions among these different mechanisms, modern SSDs can be plagued by enormous performance variations, depending on whether the underlying architectural complexities and NVM management overheads can be hidden. Designing a smart NVM controller is key to hiding the architectural complexities and reducing the internal firmware overheads. To this end, we first model a multi-plane and multi-die NVM architecture, which is highly reconfigurable and aware of the intrinsic latency variation imposed by diverse state-of-the-art NVM systems. This NVM model has been implemented as a high-fidelity open-source simulator, capable of capturing cycle-level interactions between the many components in an SSD, which can be used for various high-level and low-level NVM performance analyses. Based on this architecture model, we then explore twenty-four different concurrency methods implemented in NVM controllers, geared toward exploiting both system-level and NVM-level parallelism. Further, we quantitatively analyze the challenges faced by PCI Express-based (PCIe) SSDs in getting NVM closer to the CPU, and question popular assumptions and expectations regarding storage-class SSDs through an extensive experimental evaluation. Finally, we present and discuss the significance of read performance degradations and write performance variations by performing comprehensive empirical experiments.
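To illustrate the kind of finer-granular address mapping on which such concurrency methods build, the sketch below stripes consecutive logical pages channel-first across dies and planes so that sequential I/O exploits internal parallelism; the geometry and the striping order are assumptions, not the simulator's actual mapping.

```python
# Hedged sketch of a striping address map for a multi-channel,
# multi-die, multi-plane NVM architecture. Geometry is illustrative.

CHANNELS, DIES, PLANES = 4, 2, 2   # assumed SSD geometry

def map_page(lpn):
    """Map a logical page number to (channel, die, plane, page-in-plane),
    striping channel-first, then die, then plane."""
    channel = lpn % CHANNELS
    die = (lpn // CHANNELS) % DIES
    plane = (lpn // (CHANNELS * DIES)) % PLANES
    page = lpn // (CHANNELS * DIES * PLANES)
    return channel, die, plane, page

# Eight consecutive logical pages land on eight distinct channel/die
# pairs, so they can be programmed concurrently.
for lpn in range(8):
    print(lpn, map_page(lpn))
```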
Machine learning is on the rise and is transforming industries across the board, from climate forecasting to stock price evaluation. In this study, we explore the use of machine learning in real-time scheduling algori...