Taking the locomotive running time between goods operation sites as weights, wagons' placing-in and taking-out problems can be regarded as a single-machine scheduling problem 1 | p_ij | C_ij, which can be transformed ...
Graph coloring is the task of assigning colors or labels to elements of a graph (edges or vertices) subject to some constraints. Time table scheduling requires efficient allocation of resources in a way that no confli...
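For context, here is a minimal Python sketch of the graph-coloring view of timetabling described above: courses that clash share an edge and must receive different timeslots (colors). The course names and the greedy most-constrained-first ordering are illustrative assumptions, not the paper's method.

```python
# Illustrative greedy coloring of a course-conflict graph; a hypothetical
# example of the graph-coloring view of timetabling, not the paper's method.
def greedy_coloring(conflicts):
    """Assign each course the smallest timeslot not used by its neighbours.

    conflicts: dict mapping a course to the set of courses it clashes with
    (e.g. shared students or rooms). Returns course -> timeslot index.
    """
    slot = {}
    # Color higher-degree (most-constrained) courses first.
    for course in sorted(conflicts, key=lambda c: -len(conflicts[c])):
        used = {slot[n] for n in conflicts[course] if n in slot}
        slot[course] = next(s for s in range(len(conflicts)) if s not in used)
    return slot

conflicts = {
    "math": {"physics", "chemistry"},
    "physics": {"math"},
    "chemistry": {"math"},
    "art": set(),
}
print(greedy_coloring(conflicts))  # e.g. {'math': 0, 'physics': 1, 'chemistry': 1, 'art': 0}
```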
This research proposes an algorithmic cache arrangement scheme to efficiently utilize existing hardware that is currently plagued by the memory wall problem. The proposed scheme exploits the straightforwardness of First-in...
We consider the joint upstreaming of live and on-demand user-generated video content over LTE using a Quality-of-Experience driven approach. We contribute to the state-of-the-art work on multimedia scheduling in three...
The problem of allocating jobs to a set of parallel unrelated machines in a make to stock manufacturing system is studied. The items are subdivided into families of similar products. Sequence-dependent setups arise wh...
Cloud Computing is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources between the service provider and consumers. In cloud computing, many tasks must be executed by the available services to achieve better performance, minimum total completion time, shortest response time, good resource utilization, and so on. Because of these differing objectives, a scheduling algorithm is needed to compute an appropriate allocation map of tasks onto resources. In the existing system, a task scheduling algorithm has been designed based on priority and total completion time in cloud computing. That algorithm first computes the priority of the tasks based on user input and sorts the tasks by priority; it then calculates the minimum completion time of all tasks on the different resources and schedules them onto resources accordingly. The drawback of the existing system is that it does not effectively use idle resources. In this paper we propose a dynamic scheduling algorithm that efficiently uses the idle time of resources by monitoring task timing information on those resources. A multi-dimensional cost matrix table is built from the execution time and CPU usage of each task and the current CPU usage of each resource, and the deadline value is extended using min-max policies so that tasks complete within an earlier time period. We consider deadline, idle time and reliability as QoS parameters for scheduling.
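As a rough illustration of the cost-matrix idea described in this abstract, the following Python sketch greedily maps each task to the resource with the lowest weighted cost built from execution time and current CPU load. The weights, field names and greedy policy are assumptions for illustration only, not the authors' algorithm.

```python
# Hypothetical sketch of cost-matrix-based task placement; the weights and
# field names are illustrative assumptions, not the paper's actual algorithm.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    exec_time: dict      # resource name -> estimated execution time
    cpu_demand: float    # fraction of a CPU the task needs

def schedule(tasks, resources, cpu_load, w_time=0.7, w_cpu=0.3):
    """Greedily map each task to the resource with the lowest combined cost.

    cost = w_time * normalized execution time + w_cpu * current CPU load.
    cpu_load is updated after each placement, mimicking use of idle capacity.
    """
    plan = {}
    for task in tasks:
        best, best_cost = None, float("inf")
        max_t = max(task.exec_time.values())
        for r in resources:
            cost = w_time * task.exec_time[r] / max_t + w_cpu * cpu_load[r]
            if cost < best_cost:
                best, best_cost = r, cost
        plan[task.name] = best
        cpu_load[best] = min(1.0, cpu_load[best] + task.cpu_demand)
    return plan

if __name__ == "__main__":
    tasks = [
        Task("t1", {"vm1": 4.0, "vm2": 6.0}, 0.3),
        Task("t2", {"vm1": 5.0, "vm2": 3.0}, 0.5),
    ]
    print(schedule(tasks, ["vm1", "vm2"], {"vm1": 0.2, "vm2": 0.6}))
```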
Recently, researchers have applied semi-partitioned approaches to improve the performance of hard real-time scheduling algorithms on multiprocessor architectures. RMLS is one of these methods. However, the advantages of semi-partitioned methods are often limited by the underlying well-known scheduling algorithms such as RM and EDF: the former is simple but inefficient, while the latter is efficient but has high processing overhead. IRM is an intelligent uniprocessor algorithm that combines the advantages of both RM and EDF; using it, we present a new method, called intelligent rate-monotonic least splitting, to improve RMLS. Experimental results show that the proposed algorithm outperforms many other algorithms in the literature in terms of processor utilization.
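For background on the RM/EDF trade-off mentioned above, here is a small Python sketch of the classic uniprocessor schedulability tests: the Liu and Layland utilization bound for RM and the exact utilization test for EDF. This is textbook material shown for context only; it is not the IRM or RMLS method from the abstract.

```python
# Background sketch: classic uniprocessor schedulability tests for RM and EDF,
# illustrating the RM/EDF trade-off; not the IRM or RMLS algorithm itself.
def rm_utilization_test(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs for implicit-deadline periodic tasks.
    Returns True if total utilization <= n * (2**(1/n) - 1).
    """
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1.0 / n) - 1)

def edf_utilization_test(tasks):
    """EDF is optimal on a uniprocessor: feasible iff utilization <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

taskset = [(1, 4), (2, 6), (1, 8)]    # (WCET, period)
print(rm_utilization_test(taskset))   # sufficient only (may reject feasible sets)
print(edf_utilization_test(taskset))  # exact for implicit deadlines
```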
In this paper, we explore how a natural generalization of Shortest Remaining Processing Time (SRPT) can be a powerful meta-algorithm for online scheduling. The meta-algorithm processes jobs to maximally reduce the obj...
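For context, here is a minimal sketch of plain single-machine SRPT (always run the job with the least remaining processing time). The paper's meta-algorithm generalizes this idea; the sketch below is only the basic rule, not that generalization.

```python
# Background sketch of plain SRPT on a single machine, shown for context only.
import heapq

def srpt(jobs):
    """Simulate SRPT for jobs given as (release_time, processing_time).

    Returns total flow time (sum of completion - release over all jobs).
    """
    jobs = sorted(jobs)                      # by release time
    t, i, total_flow = 0.0, 0, 0.0
    heap = []                                # (remaining time, release time)
    while i < len(jobs) or heap:
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]                   # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, rel = heapq.heappop(heap)
        # Run the shortest remaining job until it finishes or a new job arrives.
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arrival - t)
        t += run
        if run < rem:
            heapq.heappush(heap, (rem - run, rel))
        else:
            total_flow += t - rel
    return total_flow

print(srpt([(0, 3), (1, 1), (2, 4)]))  # 11.0
```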
ISBN (digital): 9781728185248
ISBN (print): 9781728185255
Cloud computing is a new computing paradigm that lets users access services over the internet. The cloud provides scalable, on-demand and highly accessible resources, and charges its customers only for their usage. Workflow applications are used in business processing, scientific fields and many other domains. The cloud has become one of the optimal platforms for executing workflows because of the computing power and benefits it offers. Workflow scheduling can reduce the overall cost of execution and optimize resource utilization in the cloud for both the cloud consumer and the service provider. In this paper, we compare our novel Total Resource Execution Time Aware scheduling Algorithm (TRETA), which considers the total execution time of the computing resource as a factor in finding an optimal schedule, with existing heuristics. To the best of our knowledge, none of the previous work has considered this metric for finding a better schedule. We compare the proposed algorithm with the state-of-the-art heuristics First Come First Serve (FCFS), Maximum Completion Time (MCT), Maximum Execution Time (MET), MaxMin, MinMin and Distributed Heterogeneous Earliest Finish Time (DHEFT) in a heterogeneous computing environment, evaluating Makespan, Throughput and Degree of Imbalance. The experimentation uses real workload traces of the CyberShake workflow of different task sizes, generated from the Pegasus workflow management system, in WorkflowSim. The proposed algorithm achieves a makespan and throughput on par with the other heuristics and a better Degree of Imbalance than the other heuristics.
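For reference, here is a short Python sketch of the well-known MinMin baseline listed above, which repeatedly assigns the task whose minimum completion time over all machines is smallest. It is shown for context only and is not the TRETA algorithm; the ETC matrix values are made up.

```python
# Minimal sketch of the classic MinMin heuristic (one of the baselines above);
# not the TRETA algorithm. The ETC values below are illustrative only.
def min_min(etc, num_machines):
    """Schedule tasks on heterogeneous machines with the MinMin heuristic.

    etc[i][j] is the expected time to compute task i on machine j.
    Returns a mapping task -> machine and the resulting makespan.
    """
    ready = [0.0] * num_machines           # earliest free time per machine
    unscheduled = set(range(len(etc)))
    mapping = {}
    while unscheduled:
        # Pick the (task, machine) pair with the overall minimum completion time.
        best_task, best_machine, best_ct = None, None, float("inf")
        for i in unscheduled:
            for j in range(num_machines):
                ct = ready[j] + etc[i][j]
                if ct < best_ct:
                    best_task, best_machine, best_ct = i, j, ct
        mapping[best_task] = best_machine
        ready[best_machine] = best_ct
        unscheduled.remove(best_task)
    return mapping, max(ready)

etc = [[3.0, 5.0], [2.0, 4.0], [6.0, 1.0]]  # 3 tasks x 2 machines
print(min_min(etc, 2))                      # ({2: 1, 1: 0, 0: 0}, 5.0)
```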
We consider the problem of scheduling packets of different lengths via k directed parallel communication links. The links are prone to simultaneous errors: if an error occurs, all links are affected. Dynamic packet arrivals and errors are modelled by a worst-case adversary. The goal is to optimize the competitive throughput of online scheduling algorithms. Two types of failures are considered: jamming, where currently scheduled packets are simply not delivered, and crashes, where additionally the channel scheduler crashes and loses its current state. For the former, milder type of failure, we prove an upper bound on competitive throughput of 3/4 - 1/(4k) for odd values of k, and 3/4 - 1/(4k+4) for even values of k. On the constructive side, we design an online algorithm that, for packets of two different lengths, matches this upper bound. By comparison, scheduling on independent channels, that is, when the adversary can cause errors on each channel independently, reaches a throughput of 1/2. This shows that scheduling under simultaneous jamming is provably more efficient than scheduling under channel-independent jamming. In the setting with crash failures we prove a general upper bound on competitive throughput of (√5-1)/2 and design an algorithm achieving it for packets of two different lengths. This result has two interesting implications. First, simultaneous crashes are significantly stronger than simultaneous jamming. Second, due to the above-mentioned upper bound of 1/2 on throughput under channel-independent errors, scheduling under simultaneous crashes is significantly more efficient than scheduling under channel-independent crashes, similarly to the case of jamming errors.
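To make the quoted bounds concrete, the small Python snippet below evaluates them for a few values of k; the formulas are taken directly from the abstract, and the comparison values are the quoted 1/2 and (√5-1)/2.

```python
# Worked numbers for the throughput bounds quoted above.
import math

def jamming_bound(k):
    """Upper bound on competitive throughput under simultaneous jamming."""
    return 0.75 - 1 / (4 * k) if k % 2 == 1 else 0.75 - 1 / (4 * k + 4)

crash_bound = (math.sqrt(5) - 1) / 2   # ~0.618 under simultaneous crashes
independent = 0.5                      # throughput under channel-independent errors

for k in (2, 3, 8, 9):
    print(k, round(jamming_bound(k), 4))
print(round(crash_bound, 4), independent)
```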