As an environment-friendly substitute for conventional fuel-powered vehicles, EVs and their components have been widely developed and deployed worldwide. The large-scale integration of EVs into the power grid brings both challenges and opportunities for system performance. On one hand, the load demand from EV charging imposes a heavy burden on the stability and efficiency of the power grid. On the other hand, EVs could potentially act as mobile energy storage systems that improve power network performance, for example through load flattening, fast frequency control, and facilitating renewable energy integration. Evidently, uncontrolled EV charging can lead to inefficient power network operation or even security issues. This has spurred enormous research interest in designing charging coordination mechanisms. A key design challenge lies in the lack of complete knowledge of future events. Indeed, the amount of knowledge about future events significantly impacts the design of efficient charging control algorithms. This article focuses on online EV charging scheduling techniques that deal with different degrees of uncertainty and randomness in future knowledge. In addition, we highlight promising future research directions for EV charging control.
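To make the flavor of online charging control concrete, here is a minimal sketch of a least-laxity-first heuristic for a single time slot. The station model, field names, and capacity limit are illustrative assumptions, not an algorithm from the article.

```python
def schedule_slot(evs, capacity):
    """Allocate one time slot of charging to at most `capacity` EVs.

    Each EV is a dict with remaining energy need `need` (in slots) and
    `deadline` (slots until departure).  Least-laxity-first: EVs with
    the smallest slack (deadline - need) are charged first, because
    they are closest to missing their departure deadline.
    """
    active = [ev for ev in evs if ev["need"] > 0 and ev["deadline"] > 0]
    active.sort(key=lambda ev: ev["deadline"] - ev["need"])
    charged = active[:capacity]
    for ev in charged:
        ev["need"] -= 1          # one slot of energy delivered
    for ev in evs:
        ev["deadline"] -= 1      # time advances for everyone
    return [ev["id"] for ev in charged]

evs = [
    {"id": "A", "need": 2, "deadline": 2},  # zero laxity: must charge now
    {"id": "B", "need": 1, "deadline": 5},
    {"id": "C", "need": 3, "deadline": 4},
]
print(schedule_slot(evs, capacity=2))  # → ['A', 'C']
```

An online controller would call this each slot as EVs arrive and depart, which is exactly where the uncertainty about future arrivals enters.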
Recent trends in big data have shown that the amount of data continues to increase at an exponential rate. This trend has inspired many researchers over the past few years to explore new directions of study related to multiple areas of big data. With the widespread popularity of big data processing platforms built on the MapReduce framework comes a growing demand to further optimize their performance for various purposes. In particular, enhancing resource and job scheduling is becoming critical, since scheduling fundamentally determines whether applications can achieve their performance goals in different use cases; it plays an important role in big data, mainly in reducing the execution time and cost of processing. This paper surveys the research undertaken in the field of scheduling in big data platforms. Moreover, it analyzes MapReduce scheduling from two aspects: taxonomy and performance evaluation. The research progress in MapReduce scheduling algorithms is also discussed. The limitations of existing MapReduce scheduling algorithms and open research opportunities are pointed out for easy identification by researchers. Our study can serve as a benchmark for expert researchers proposing novel MapReduce scheduling algorithms, while for novice researchers it can be used as a starting point.
Motivated by applications in data center networks, in this paper we study the problem of scheduling in an input-queued switch. While throughput-maximizing algorithms for a switch are well understood, delay analysis was developed only recently. It was shown that the well-known MaxWeight algorithm achieves optimal scaling of mean queue lengths in steady state in the heavy-traffic regime, and is within a factor of less than 2 of a universal lower bound. However, MaxWeight is not used in practice because of its high time complexity. In this paper, we study several low-complexity algorithms and show that their heavy-traffic performance is identical to that of MaxWeight. We first present a negative result: picking a random schedule does not achieve optimal heavy-traffic scaling of queue lengths, even under uniform traffic. We then show that picking the best of two matchings, or modifying a random matching even a little using the so-called flip operation, leads to MaxWeight-like heavy-traffic performance under uniform traffic. We then focus on the case of non-uniform traffic and show that a large class of low-time-complexity algorithms have the same heavy-traffic performance as MaxWeight, as long as a MaxWeight matching is picked often enough. We also briefly discuss the performance of these algorithms in the large-scale heavy-traffic regime, when the size of the switch increases simultaneously with the load. Finally, we perform an empirical study of a new algorithm to compare its performance with some existing algorithms.
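For intuition, MaxWeight picks the input-output matching whose matched queue lengths sum to the maximum. The brute-force sketch below works for a tiny switch; the exponential search over all matchings is exactly the complexity issue that motivates the low-complexity alternatives studied in the paper.

```python
from itertools import permutations

def maxweight_matching(Q):
    """Exhaustive MaxWeight for an N x N input-queued switch.

    Q[i][j] is the length of the virtual output queue at input i for
    output j.  A matching is a permutation: perm[i] = output matched
    to input i.  Returns the matching with maximum total queue length.
    """
    n = len(Q)
    best, best_w = None, -1
    for perm in permutations(range(n)):
        w = sum(Q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

Q = [[3, 0, 1],
     [0, 2, 5],
     [4, 1, 0]]
match, weight = maxweight_matching(Q)
print(match, weight)  # → (0, 2, 1) 9
```

The "flip" idea mentioned in the abstract perturbs a random matching locally rather than searching all N! matchings, trading exactness for speed.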
An IEEE 802.16 wireless system can provide broadband wireless access to subscriber stations and operate in mesh mode. The communication between a subscriber station and a base station can pass through one or more intermediate subscriber stations. The IEEE 802.16 standard provides a centralized scheduling mechanism that supports contention-free, resource-guaranteed transmission services in mesh mode. However, the corresponding scheduling algorithm specified in the standard is quite primitive. In this paper, we propose a more efficient way to realize this schedule by maximizing channel utilization. Our design is divided into two phases: routing and scheduling. First, a routing tree topology is constructed from a given mesh topology by our proposed tree construction algorithm. Second, we allocate channel resources to the edges of the routing tree by our proposed scheduling algorithm. To further support quality-of-service scheduling, we extend our designs to address issues such as service class, admission control, and fairness. Simulation results show the superiority of our proposed algorithms over others. Copyright (C) 2011 John Wiley & Sons, Ltd.
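As a point of reference for the routing phase, a routing tree can be extracted from a mesh topology with a plain breadth-first search rooted at the base station. This sketch is generic BFS, not the paper's utilization-aware construction; node names are illustrative.

```python
from collections import deque

def bfs_routing_tree(adj, base):
    """Build a shortest-hop routing tree rooted at the base station.

    `adj` maps each node to its mesh neighbours.  Returns a parent map:
    parent[node] is the next hop toward the base station (None for the
    base itself).
    """
    parent = {base: None}
    q = deque([base])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:   # first visit = shortest hop count
                parent[v] = u
                q.append(v)
    return parent

mesh = {"BS": ["1", "2"], "1": ["BS", "3"], "2": ["BS", "3"], "3": ["1", "2"]}
print(bfs_routing_tree(mesh, "BS"))  # → {'BS': None, '1': 'BS', '2': 'BS', '3': '1'}
```

The scheduling phase would then assign channel resources along the edges of this tree, which is where the paper's contribution lies.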
Task scheduling and resource utilization have always been among the most critical issues for high performance in heterogeneous computing. The heterogeneity of computation costs on a given set of computing elements, and of the communication costs among them, increases the complexity of the scheduling problem. Extensive research shows that list-based task scheduling algorithms generate the most efficient schedules for complex workflow applications in heterogeneous computing environments. Such workflow applications comprise thousands of interconnected tasks with dependencies. Over the last two decades, various list-based scheduling algorithms have been proposed to achieve performance objectives such as minimizing makespan and energy consumption and maximizing resource utilization and reliability. In this article, list-based workflow scheduling algorithms from the last two decades are reviewed, under the assumption that heterogeneous computing systems serve as the underlying infrastructure. The review categorizes the algorithms by scheduling objective. For deeper analysis, each algorithm is compared with the others on its objectives, merits, comparison metrics, workload type, experimental scale, experimental environment, and reported results. Finally, an experimental analysis of seven state-of-the-art algorithms is conducted on randomly generated workflows to illustrate how list-scheduling algorithms work. The main purpose of this article is to give proper direction to new researchers who wish to work on workflow scheduling in heterogeneous computing environments.
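To illustrate the list-based family, here is a toy HEFT-style scheduler on identical processors with zero communication costs, a deliberate simplification: real list schedulers weigh heterogeneous execution and communication costs. Task, successor, and cost names are illustrative.

```python
def list_schedule(tasks, succ, cost, n_procs):
    """Toy list scheduler: rank tasks, then greedily map each to the
    processor giving the earliest finish time.

    `tasks` must be topologically ordered; `succ[t]` lists successors;
    `cost[t]` is execution time.  Returns the makespan.
    """
    # Upward rank: longest path from a task to the exit (its priority).
    rank = {}
    for t in reversed(tasks):
        rank[t] = cost[t] + max((rank[s] for s in succ[t]), default=0)
    ready_time = {t: 0 for t in tasks}   # when all predecessors finish
    proc_free = [0] * n_procs            # when each processor frees up
    finish = {}
    for t in sorted(tasks, key=lambda t: -rank[t]):  # list phase
        p = min(range(n_procs), key=lambda q: max(proc_free[q], ready_time[t]))
        start = max(proc_free[p], ready_time[t])
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        for s in succ[t]:
            ready_time[s] = max(ready_time[s], finish[t])
    return max(finish.values())

tasks = ["A", "B", "C"]
succ = {"A": ["C"], "B": ["C"], "C": []}
cost = {"A": 2, "B": 3, "C": 1}
print(list_schedule(tasks, succ, cost, 2))  # → 4
```

With positive costs, descending upward rank is always a valid topological order, which is why the list phase can safely process tasks in that order.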
In a multifunctional radar performing searching and tracking operations, the maximum number of targets that can be managed is an important measure of performance. One way a radar can maximize tracking performance is to optimize its dwell scheduling. The problem of designing efficient dwell scheduling algorithms for various tracking and searching scenarios, with respect to various objective functions, has been considered many times, and many solutions have been proposed. We consider the dwell scheduling problem for two different scenarios where the only objective is to maximize the number of dwells scheduled during a scheduling period. We formulate the problem as a distributed and a nondistributed bin packing problem and present optimal solutions using an integer programming formulation. The optimal solution establishes the limit of radar performance. We also present a computationally cheaper, though suboptimal, solution using a greedy approach.
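The greedy flavor can be sketched as shortest-dwell-first, first-fit packing into the scheduling period. The single-bin-per-channel model below is an illustrative simplification, not the paper's exact formulation.

```python
def greedy_dwell_count(durations, period, n_channels=1):
    """Greedily maximize the number of dwells packed into `n_channels`
    scheduling periods of length `period`.

    Shortest dwells first (more short dwells fit than long ones),
    first-fit into any channel with enough remaining time.
    """
    remaining = [period] * n_channels
    count = 0
    for d in sorted(durations):
        for i in range(n_channels):
            if d <= remaining[i]:
                remaining[i] -= d
                count += 1
                break
    return count

print(greedy_dwell_count([4, 2, 7, 1, 3], period=10))  # → 4
```

An integer program over the same bins would certify the true maximum, which is how the paper obtains the limit of radar performance.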
Time-sensitive networking is an emerging technology that enables deterministic and reliable transmission in bridged Ethernet networks. The enhancement for scheduled traffic defined in IEEE 802.1Qbv [1] allows the implementation of time-aware shaping (TAS), which grants periodic transmission slices to the various priority queues of a bridge. TAS is an enabler for traffic scheduling: frame transmissions of periodic streams at senders and the TAS on intermediate bridges are configured such that these frames experience no loss and hardly any queuing delay. Thereby, deterministic bounds on delay and jitter can be guaranteed for such streams. However, the standard does not provide an algorithm to compute transmission schedules. Therefore, more than 100 research works [Stüber et al. (2023)] propose various algorithms for computing such schedules. Nevertheless, many challenges remain open in this area. In this work, we implement eleven of these algorithms and compare their performance under various conditions with regard to schedule quality and runtime. The comparison reveals that the performance of the algorithms varies widely and exposes their shortcomings. The set of problem instances for this study covers a wide range of parameters and is released to the public so that the performance of new algorithms can easily be compared to those in this study.
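A tiny sketch of the underlying scheduling problem: assign transmission offsets to periodic streams sharing one link so that their frames never collide within the hyperperiod. This brute-force greedy is illustrative only; the surveyed algorithms handle multi-hop networks, queues, and gate control lists.

```python
from math import gcd

def assign_offsets(streams):
    """Greedy no-wait offset assignment for periodic streams on one link.

    Each stream is (period, frame_len) in integer time units.  For each
    stream, pick the smallest offset whose transmission windows collide
    with no already-placed stream anywhere in the hyperperiod.
    Returns one offset per stream (None if it cannot be placed).
    """
    hyper = 1
    for p, _ in streams:
        hyper = hyper * p // gcd(hyper, p)   # lcm of all periods
    busy = [False] * hyper                   # occupied link time units
    offsets = []
    for period, flen in streams:
        placed = None
        for off in range(period - flen + 1):
            slots = [off + k * period + j
                     for k in range(hyper // period) for j in range(flen)]
            if not any(busy[s] for s in slots):
                for s in slots:
                    busy[s] = True
                placed = off
                break
        offsets.append(placed)
    return offsets

print(assign_offsets([(4, 1), (4, 1), (8, 2)]))  # → [0, 1, 2]
```

Because placement order and tie-breaking change which instances are solvable, heuristics like this differ widely in schedule quality, which is what the benchmark in this work quantifies.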
Field-programmable gate arrays (FPGAs) are often used to accelerate multiple tasks simultaneously, working in a tightly coupled processor-coprocessor architecture. Recently, with the fast development of emerging memory technologies, multi-context FPGAs with high-density memories that support fast dynamic reconfiguration have become feasible. Compared with single-context FPGAs, multi-context FPGAs have a much higher on-chip configuration memory capacity, but they have not been thoroughly investigated to exploit their capabilities. In this paper, we investigate how best to utilize the capacity advantage of multi-context FPGAs. We first propose a static placement strategy that places the requested hardware tasks with minimal area on the FPGA. We then optimize the running time of the static placement without sacrificing solution quality. Alongside the static placement, we propose collaborative online placement and scheduling strategies to manage the actual execution and reconfiguration of hardware tasks on a multi-context FPGA. Our experiments show that the static placement algorithm generates high-quality placement solutions within a short time. Starting from the static placement solution, our collaborative online placer and scheduler handles simultaneous acceleration tasks and reduces the acceleration task rejection rate significantly compared to a baseline design.
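Static placement of rectangular hardware tasks can be sketched as bottom-left first-fit on a logic grid. This toy version ignores reconfiguration contexts, routing, and I/O, all of which the paper's strategy must handle; grid size and task shapes are illustrative.

```python
def first_fit_place(grid_w, grid_h, tasks):
    """Place rectangular tasks (w, h) bottom-left first-fit on a grid.

    Returns the (x, y) corner chosen for each task, or None if the
    task does not fit anywhere (i.e., it would be rejected).
    """
    occ = [[False] * grid_w for _ in range(grid_h)]

    def fits(x, y, w, h):
        return all(not occ[y + j][x + i] for j in range(h) for i in range(w))

    placements = []
    for w, h in tasks:
        spot = None
        for y in range(grid_h - h + 1):          # bottom row first
            for x in range(grid_w - w + 1):      # leftmost column first
                if fits(x, y, w, h):
                    spot = (x, y)
                    break
            if spot:
                break
        if spot:
            x, y = spot
            for j in range(h):
                for i in range(w):
                    occ[y + j][x + i] = True
        placements.append(spot)
    return placements

print(first_fit_place(4, 4, [(2, 2), (2, 2), (4, 2), (3, 1)]))
```

The last task is rejected here, which mirrors the rejection-rate metric the paper uses to compare its online placer against a baseline.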
We examine the problem of scheduling the robot inverse dynamics computation, consisting of m computational modules, on a multiprocessor system of p identical homogeneous processors so as to achieve the minimum schedule length. This scheduling problem is known to be NP-complete. To achieve the minimum computation time, the Newton-Euler equations of motion are expressed in homogeneous linear recurrence form, which yields maximum parallelism. To speed up the search for a solution, a heuristic algorithm called dynamical highest-level-first/most-immediate-successors-first (DHLF/MISF) is proposed to find a fast but suboptimal schedule. For an optimal schedule, the minimum-schedule-length problem can be solved by a state-space search method: the A* algorithm coupled with an efficient heuristic function derived from the Fernandez and Bussell bound.
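A simplified stand-in for the highest-level-first idea, assuming unit-time modules on p identical processors: the level of a module is the length of its longest chain to an exit module, and at each step the p ready modules with the highest levels run. The actual DHLF/MISF heuristic is dynamic and breaks ties by immediate-successor counts.

```python
def hlf_schedule(succ, p):
    """Highest-level-first scheduling of unit-time modules on p
    identical processors.  `succ[t]` lists the successors of module t.
    Returns the schedule length in unit steps.
    """
    level = {}
    def lv(t):  # longest chain (in modules) from t to an exit
        if t not in level:
            level[t] = 1 + max((lv(s) for s in succ[t]), default=0)
        return level[t]
    for t in succ:
        lv(t)
    indeg = {t: 0 for t in succ}
    for t in succ:
        for s in succ[t]:
            indeg[s] += 1
    ready = sorted((t for t in succ if indeg[t] == 0),
                   key=lambda t: -level[t])
    steps = 0
    while ready:
        running, ready = ready[:p], ready[p:]   # top-p levels run now
        steps += 1
        for t in running:
            for s in succ[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
        ready.sort(key=lambda t: -level[t])
    return steps

diamond = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(hlf_schedule(diamond, 2))  # → 3
```

An A* search over partial schedules, pruned by a lower bound such as the Fernandez and Bussell bound, would replace this greedy step to recover the optimum.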
Many modern real-time parallel applications can be modeled as a directed acyclic graph (DAG) task. Recent studies show that the worst-case response time (WCRT) bound of a DAG task can be significantly reduced when the execution order of the vertices is determined by the priority assigned to each vertex of the DAG. How to obtain the optimal vertex priority assignment, and how far the best-known WCRT bound of a DAG task is from the minimum WCRT bound, are still open problems. In this article, we aim to construct the optimal vertex priority assignment and derive the minimum WCRT bound for the DAG task. We encode the priority assignment problem as an integer linear programming (ILP) formulation. To solve the ILP model efficiently, we do not include all variables and constraints at once. Instead, we solve the ILP model iteratively: we initially solve it with only a few primary variables and constraints, and at each iteration we augment it with the variables and constraints most likely to yield the optimal priority assignment. Experiments show that our method solves the ILP model optimally without involving too many variables or constraints; e.g., for instances with 50 vertices, we find the optimal priority assignment while involving only 12.67% of the variables on average, within several minutes.
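For context, the classical baseline that priority-aware analyses tighten is the Graham-style bound: the longest path through the DAG plus the remaining work divided by the number of cores. A sketch under that assumption (symbols and the example graph are illustrative, not the article's formulation):

```python
def wcrt_bound(wcet, succ, m):
    """Classical WCRT bound for a DAG task on m cores:
    longest path + (total work - longest path) / m.
    `wcet[v]` is the WCET of vertex v; `succ[v]` lists its successors.
    """
    longest = {}
    def lp(v):  # longest path (by WCET) starting at v
        if v not in longest:
            longest[v] = wcet[v] + max((lp(s) for s in succ[v]), default=0)
        return longest[v]
    L = max(lp(v) for v in wcet)      # critical path length
    vol = sum(wcet.values())          # total work
    return L + (vol - L) / m

wcet = {"A": 1, "B": 2, "C": 3, "D": 1}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(wcrt_bound(wcet, succ, 2))  # critical path 5, volume 7 → 6.0
```

The article's contribution is to shrink this bound further by fixing vertex priorities, searching for the optimal assignment with an iteratively grown ILP rather than a closed-form expression like the one above.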