ISBN:
(Print) 9781424414970; 1424414970
In the context of least slack first scheduling, context switches may occur frequently. The extra overhead of preemptions among tasks significantly degrades the performance of soft real-time systems. In this paper, we present a novel scheduling algorithm, named dynamic fuzzy threshold least slack first (DFTLSF) scheduling, which solves the frequent-switching problem of the least slack first algorithm. The notion of a dynamic fuzzy threshold coefficient is defined to fuzzify the threshold dynamically. The slack time of the running task is reduced to its fuzzy threshold to avoid thrashing. Compared with the traditional least slack first scheduling algorithm, the simulation results show that dynamic fuzzy preemption reduces both the number of switches and the missed-deadline percentage.
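The core idea can be sketched as least-slack-first selection with a preemption threshold; this is a minimal illustration of the general mechanism, not the paper's DFTLSF (the fuzzy coefficient is replaced here by an assumed fixed threshold, and all task fields are hypothetical):

```python
def slack(task, now):
    """Slack = time to deadline minus remaining execution time."""
    return task["deadline"] - now - task["remaining"]

def pick_next(running, ready, now, threshold=2):
    """Keep the running task unless a ready task's slack undercuts the
    running task's slack by more than the threshold. Plain LSF preempts
    on any strictly smaller slack, which causes frequent switching."""
    if not ready:
        return running
    challenger = min(ready, key=lambda t: slack(t, now))
    if running is None:
        return challenger
    if slack(challenger, now) < slack(running, now) - threshold:
        return challenger
    return running

running = {"name": "A", "deadline": 20, "remaining": 5}   # slack 15 at t=0
ready = [{"name": "B", "deadline": 18, "remaining": 4}]   # slack 14 at t=0
print(pick_next(running, ready, now=0)["name"])  # "A": within threshold, no switch
print(pick_next(running, ready, now=0, threshold=0)["name"])  # "B": plain LSF preempts
```

With `threshold=0` the rule degenerates to classic LSF, which is exactly the thrashing case the paper targets.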
ISBN:
(Print) 9781509006809
Real-time embedded systems have become widely used in many fields such as control, monitoring and aviation. They perform several tasks under strict time constraints. In such systems, a deadline miss may lead to catastrophic results, so all jobs need to be scheduled appropriately to ensure that they meet their deadlines. This paper presents an efficient run-time dynamic scheduling algorithm for periodic tasks in both multiprocessor and uniprocessor environments using a dynamic average estimation. Dynamic average estimation refers to updating the underlying probability distributions when a task is added to or removed from them. A Worst-Case Execution Time (WCET) value is not always available in many real-time applications such as multimedia, where data shows great variation. The proposed approach selects which task or set of tasks must be picked for execution. A simulation system was developed to validate the proposed approach.
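One minimal reading of replacing WCET with a dynamic average is a running estimator that is updated as jobs finish or old samples are discarded; the class name and interface below are assumptions for illustration, not the paper's API:

```python
class AvgEstimator:
    """Maintains a running mean of observed execution times, usable where
    a WCET bound is unavailable (e.g. multimedia workloads)."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, observed):     # a job of this task finished
        self.total += observed
        self.count += 1

    def remove(self, observed):  # discard an old sample from the distribution
        self.total -= observed
        self.count -= 1

    def estimate(self):
        return self.total / self.count if self.count else 0.0

est = AvgEstimator()
for cost in (3.0, 5.0, 4.0):
    est.add(cost)
print(est.estimate())   # 4.0
est.remove(5.0)
print(est.estimate())   # 3.5: the estimate adapts when samples are removed
```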
ISBN:
(Digital) 9798350355895
ISBN:
(Print) 9798350355901
Integrated radar and communications (IRC) waveforms can be applied to perform radar and communication tasks simultaneously in a multifunctional integration system (MFIS). To solve the task scheduling problems of an MFIS based on a dynamic aperture-partition antenna, a novel adaptive task scheduling algorithm based on the IRC waveform is proposed in this paper. Firstly, after establishing the multi-task models, the task scheduling optimization problem is formulated under two-dimensional time and aperture resource constraints. Secondly, the proposed adaptive task scheduling algorithm exploits the special IRC waveform characteristics of the MFIS, the task time windows, and the aperture resource to schedule the different kinds of tasks effectively. Finally, the simulation results show that the proposed algorithm increases the successful scheduling ratio and the resource utilization ratio compared with traditional multiple-task parallel scheduling algorithms.
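The two-dimensional constraint can be sketched as a feasibility check over time and aperture: a task needs an interval inside its time window plus a fraction of the antenna aperture for that interval. Everything below (field names, the discretized timeline, the aperture fractions) is an illustrative simplification, not the paper's model:

```python
def can_schedule(task, aperture_free):
    """Try each start step inside the task's time window; succeed if the
    required aperture fraction is free for the whole duration, then
    allocate it. Returns the start step, or None if infeasible."""
    for start in range(task["earliest"], task["latest"] - task["dur"] + 1):
        if all(aperture_free[t] >= task["aperture"]
               for t in range(start, start + task["dur"])):
            for t in range(start, start + task["dur"]):
                aperture_free[t] -= task["aperture"]
            return start
    return None

free = [1.0] * 6                                    # full aperture, 6 time steps
radar = {"earliest": 0, "latest": 6, "dur": 2, "aperture": 0.75}
comm  = {"earliest": 0, "latest": 6, "dur": 3, "aperture": 0.25}
print(can_schedule(radar, free))  # 0
print(can_schedule(comm,  free))  # 0: the remaining 0.25 aperture suffices
```

Both tasks fit at step 0 because the aperture is partitioned between them, which is the simultaneity the IRC waveform enables.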
New cellular networks standardized by 3GPP, such as LTE and LTE-A, aim to provide scalable resources that optimize both system performance and user satisfaction. To achieve this goal, the scheduling algorithm becomes of utmost importance, and a number of different resource allocation algorithms have been suggested. However, the uplink has received little coverage because it poses more specific constraints and it is quite complicated to balance the trade-off between channel state information, system throughput and user-perceived throughput. In this paper we propose a new resource allocation algorithm that tries to combine the advantages of two previously suggested ones. We also introduce a new parameter, the user ratio, which allows us to explicitly quantify the trade-off between fairness, system throughput and user throughput for different channel conditions.
In this work, we present a Genetic Algorithm (GA) based method for pipeline scheduling optimization. The objective is to minimize the circuit area under both data initiation interval and pipeline latency constraints. During initialization, the scheduler generates a series of solutions between the As Soon As Possible (ASAP) and As Late As Possible (ALAP) schedules. Afterwards, a Linear Programming (LP) algorithm is applied to transform infeasible solutions into feasible ones, which are input to the GA to search for the optimal result. In the experiments, our proposed algorithm achieves an average area improvement of 29.74% compared with the ASAP and ALAP methods.
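The ASAP bound that anchors the initialization can be sketched as a longest-path pass over the operation DAG (ALAP is the symmetric pass back from the latency bound). The tiny DAG and delays below are assumptions for illustration:

```python
def asap(ops, deps):
    """ops: op -> delay in control steps; deps: op -> list of predecessors.
    An op starts as soon as all of its predecessors have finished."""
    start = {}
    def visit(op):
        if op not in start:
            start[op] = max((visit(p) + ops[p] for p in deps.get(op, [])),
                            default=0)
        return start[op]
    for op in ops:
        visit(op)
    return start

ops  = {"mul1": 2, "mul2": 2, "add": 1}   # two 2-cycle multiplies, one add
deps = {"add": ["mul1", "mul2"]}          # the add consumes both products
print(asap(ops, deps))  # {'mul1': 0, 'mul2': 0, 'add': 2}
```

Any schedule between this ASAP result and the ALAP counterpart respects the data dependences, which is why the GA population is seeded inside that interval.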
Most earlier grid scheduling algorithms were based on a centralized scheduler. Relying on a central scheduler not only creates a single point of failure; it is also impractical because of scalability and political issues in present-day gigantic grid systems. Hence, meta-schedulers came into the limelight. However, many authors have recognized the limitations of hierarchical grid scheduling and proposed peer-to-peer (P2P) techniques, which have potential for grid scheduling. In this paper, a new decentralized scheduling algorithm is proposed for P2P grid systems. In this method, an independent task autonomously selects the most suitable grid node based on local information from immediate neighbors. A vital feature of this method is that it can schedule both computation-intensive and communication-intensive tasks to balance the grid system's workload.
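The local decision can be sketched as each node scoring only itself and its immediate neighbors; the suitability score used here (queue load plus a data-transfer penalty) and all node fields are assumptions for illustration, not the paper's metric:

```python
def pick_node(task, local, neighbors):
    """Score each candidate from local knowledge only: current load plus
    the cost of shipping the task's data over the link to that node."""
    def score(n):
        return n["load"] + task["data_mb"] / n["bandwidth"]
    return min([local] + neighbors, key=score)["name"]

local = {"name": "self", "load": 20.0, "bandwidth": float("inf")}  # overloaded
neighbors = [{"name": "p1", "load": 2.0, "bandwidth": 10.0},
             {"name": "p2", "load": 1.0, "bandwidth": 1.0}]
# Computation-intensive task (little data): lightly loaded p2 wins
# despite its slow link.
print(pick_node({"data_mb": 1.0}, local, neighbors))    # "p2"
# Communication-intensive task: p1's faster link outweighs its higher load.
print(pick_node({"data_mb": 100.0}, local, neighbors))  # "p1"
```

Because each node decides from neighbor state alone, there is no central scheduler to fail, which is the decentralization argument of the abstract.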
Implementation of various incentive-based demand response strategies has great potential to decrease peak load growth and customers' electricity bills. Advanced metering and automatic demand management make it possible to optimize energy consumption, reduce grid losses, and release generation capacity to provide a sustainable electricity supply. Executing an incentive-based program is a simple way for customers to monitor and manage their energy consumption and thereby reduce their electricity bills. With these objectives, this paper examines previously suggested load scheduling programs and proposes a new practical one for residential energy management. The method aims to optimize customers' bill cost and satisfaction by taking into consideration the generation capacity limitation and the dynamic electricity price in different time slots of the day. Moreover, the proposed optimization algorithm is compared with the Particle Swarm Optimization (PSO) algorithm to illustrate its high efficiency as a practical industrial tool for peak load shaving.
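A minimal sketch of price-aware load scheduling under a capacity cap: place each shiftable appliance run in the cheapest slot that still has generation capacity. The prices, loads, and greedy rule are assumptions for illustration; the paper's formulation is richer and is benchmarked against PSO:

```python
def schedule(appliances, prices, capacity):
    """appliances: name -> kW load; prices: $/kWh per slot.
    Returns name -> chosen slot, greedily filling cheap slots first
    while respecting the per-slot capacity limit."""
    used = [0.0] * len(prices)
    plan = {}
    for name, kw in appliances.items():
        feasible = [s for s in range(len(prices)) if used[s] + kw <= capacity]
        slot = min(feasible, key=lambda s: prices[s])
        used[slot] += kw
        plan[name] = slot
    return plan

prices = [0.30, 0.10, 0.12]          # slot 1 is the cheapest
plan = schedule({"washer": 2.0, "dryer": 3.0, "ev": 4.0}, prices, 5.0)
print(plan)  # washer and dryer fill slot 1; the EV spills to slot 2
```

The capacity cap is what produces the peak-shaving effect: once the cheap slot is full, further load spreads to the next-cheapest slot instead of stacking up.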
Wireless sensor networks (WSNs) are designed for data gathering, processing and transmission with particular requirements: low hardware complexity, low energy consumption, special traffic pattern support, scalability, and in some cases, real-time operation. The emergence of wireless cyber-physical systems leads to the need for real-time scheduling of data packets. Developing packet scheduling algorithms in WSNs can efficiently enhance the delivery of packets over wireless links. Packet scheduling is the process of selecting or rejecting a packet based on a decision. Packets are transmitted according to various algorithms within the network, and packets may be dropped due to packet size, bandwidth, packet arrival rate, or packet deadline. To achieve real-time delivery, the paths must deliver the data in time. Several algorithms have been evaluated for packet scheduling of real-time data to achieve predictable and bounded end-to-end latencies while meeting query deadlines. Among them, NJNC (Nearest Job Next with Combination) excels in mobility-assisted data collection by serving multiple combined requests together in an on-demand manner, without the starvation problem of existing schemes such as first-come-first-served (FCFS) and shortest-job-next (SJN). The results show that NJNC provides better performance than nearest-job-next (NJN).
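The base Nearest-Job-Next rule that NJNC extends can be sketched as a greedy tour: after finishing a request, the mobile collector serves the closest pending one, unlike FCFS (arrival order) or SJN (service time). The 1-D positions are a simplification, and NJNC's merging of co-located requests is only noted in a comment:

```python
def njn_order(collector_pos, requests):
    """Serve requests greedily by distance from the collector's current
    position; NJNC would additionally combine requests at the same
    position into a single visit."""
    pending, order = list(requests), []
    pos = collector_pos
    while pending:
        nxt = min(pending, key=lambda r: abs(r["pos"] - pos))
        pending.remove(nxt)
        order.append(nxt["name"])
        pos = nxt["pos"]
    return order

reqs = [{"name": "r1", "pos": 9}, {"name": "r2", "pos": 2},
        {"name": "r3", "pos": 3}]
print(njn_order(0, reqs))  # ['r2', 'r3', 'r1'] — not arrival order (FCFS)
```

Pure NJN can starve a distant request while nearby ones keep arriving; combining co-located requests per visit is how NJNC mitigates that, per the abstract.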
ISBN:
(Print) 9781665435413
Unmanned Aerial Vehicles (UAVs) have been widely applied in many domains, but computation and energy resource limitations severely hinder their development and application. Mobile Edge Computing (MEC) emerges as a promising platform to process tasks offloaded from UAVs and effectively improve Quality-of-Service (QoS). To realize this vision, the edge servers must first be deployed with the service needed to handle the offloaded tasks. Fortunately, cloud-native computing technology makes it possible to deploy container-based microservices to MEC promptly. This raises the task scheduling problem of whether to deploy a new service or utilize an existing one, balancing the overhead between data transmission and microservice deployment (i.e., container image pulling) to minimize the overall task completion time. In this paper, the problem is first formulated as an Integer Linear Program (ILP) and proved to be NP-hard. We further propose an incentive-based request scheduling algorithm. Experiments based on trace-driven simulations show that the total completion time for all tasks is reduced by 21.37% compared to the state-of-the-art solution.
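The core trade-off can be sketched as comparing, per server, the offloading transmission time against the cost of pulling the container image when the service is not yet deployed. All numbers and field names below are illustrative assumptions, not the paper's ILP:

```python
def completion_time(task_mb, server):
    """Transmission time, plus image-pull time if the microservice is not
    already deployed on this server, plus execution time."""
    t = task_mb / server["bw_mbps"]
    if not server["has_service"]:
        t += server["image_pull_s"]          # container image pulling cost
    return t + server["exec_s"]

servers = [
    {"name": "near", "bw_mbps": 100.0, "has_service": False,
     "image_pull_s": 8.0, "exec_s": 1.0},
    {"name": "far",  "bw_mbps": 10.0,  "has_service": True,
     "image_pull_s": 0.0, "exec_s": 1.0},
]
# Small task: the image-pull cost dominates, so reuse the deployed service.
print(min(servers, key=lambda s: completion_time(20.0, s))["name"])    # "far"
# Large task: transmission dominates, so pay the pull on the fast link.
print(min(servers, key=lambda s: completion_time(2000.0, s))["name"])  # "near"
```

The crossover between the two choices is exactly the transmission-versus-deployment balance the scheduling problem formalizes.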
ISBN:
(Digital) 9781728180120
ISBN:
(Print) 9781728180137
IT infrastructures are rapidly growing due to the increased demand for computing power by applications. Furthermore, modern cloud data centers host various advanced applications based on users' needs. The goal of this paper is to maximize the reliability of running workflow applications in the presence of spot instance revocations without imposing fault-tolerance overhead. For this purpose, we use an Artificial Neural Network (ANN) to build a failure prediction module for cloud spot instances. We then introduce a novel workflow scheduling algorithm, a reliability-aware modified HEFT (Heterogeneous Earliest Finish Time), for minimizing the makespan of a given workflow subject to a specified application reliability. To evaluate our ANN-based prediction model, we used a benchmark data set. The model training results demonstrate a prediction accuracy of about 96 percent. Our experiments illustrate that although the makespan does not improve significantly in all experiments, an acceptable level of reliability is achieved.
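Standard HEFT, the base of the modified algorithm, prioritizes tasks by upward rank: a task's average execution cost plus the most expensive path (communication plus rank) through its successors. The sketch below covers only that ranking step, with illustrative costs; the paper's reliability constraint and ANN predictor are not modeled:

```python
def upward_ranks(w, comm, succ):
    """rank_u(t) = w[t] + max over successors s of (comm[(t, s)] + rank_u(s)).
    HEFT schedules tasks in decreasing rank order."""
    rank = {}
    def r(t):
        if t not in rank:
            rank[t] = w[t] + max((comm[(t, s)] + r(s)
                                  for s in succ.get(t, [])), default=0.0)
        return rank[t]
    for t in w:
        r(t)
    return rank

w    = {"t1": 5.0, "t2": 6.0, "t3": 4.0}        # average execution costs
succ = {"t1": ["t2", "t3"]}                     # t1 feeds t2 and t3
comm = {("t1", "t2"): 2.0, ("t1", "t3"): 1.0}   # average transfer costs
ranks = upward_ranks(w, comm, succ)
print(sorted(w, key=lambda t: -ranks[t]))  # ['t1', 't2', 't3']
```

The paper's variant keeps this priority order but additionally constrains processor selection by the predicted spot-instance reliability.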