A parallel optimisation technique for large join queries is presented. The technique processes the search space of query execution plans twice: the first scan is based on iterative improvement; the second scan uses the...
Based on the characteristics of the elevator group control system, this paper selects the shortest-distance algorithm as the scheduling strategy and constructs an elevator running model. On this basis, this...
Process scheduling in single- and multi-processor systems is one of the most intensively studied research problems. In this paper we propose an approach to process scheduling based on a backtracking technique. The approach treats the total finish time (TFT) as the main parameter and limits the load of each processor so that it does not exceed the ideal measure. The proposed approach always yields an optimized solution. Simulation shows that its results are better than those of the LPT (longest processing time), SPT (shortest processing time) and PSO (particle swarm optimization) algorithms.
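The backtracking idea in this abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the load cap (the "ideal measure" taken here as the balanced load rounded up), the pruning rule, and the task sizes are all assumptions.

```python
import math

def backtrack_schedule(tasks, m):
    """Assign each task to one of m processors by backtracking, so that no
    processor's load exceeds the ideal measure (ceil of the balanced load),
    minimizing the total finish time (makespan). Returns (inf, None) if no
    assignment fits under the strict cap."""
    ideal = math.ceil(sum(tasks) / m)          # assumed "ideal measure"
    best = {"makespan": float("inf"), "assign": None}
    loads = [0] * m
    assign = [None] * len(tasks)

    def rec(i):
        if i == len(tasks):                    # complete assignment reached
            if max(loads) < best["makespan"]:
                best["makespan"] = max(loads)
                best["assign"] = assign[:]
            return
        for p in range(m):
            if loads[p] + tasks[i] <= ideal:   # respect the per-processor cap
                loads[p] += tasks[i]
                assign[i] = p
                if max(loads) < best["makespan"]:  # prune dominated branches
                    rec(i + 1)
                loads[p] -= tasks[i]           # backtrack
                assign[i] = None

    rec(0)
    return best["makespan"], best["assign"]
```

For tasks [4, 3, 2, 2, 1] on two processors, any cap-respecting assignment balances both processors at load 6, which the search finds and returns as the makespan.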
ISBN (print): 9781509016679
In a Grid computing environment, application scheduling is crucial because the resources are heterogeneous, geographically distributed, complex and owned by different organizations, which makes them more prone to failures. Generally, during application/job scheduling, only the performance factor of resources is considered. But if a node with high computational power also has a high failure rate, there is little benefit in allocating tasks to that node, because every failure requires recovery and thus costs time. Failures therefore increase the make-span of a job and decrease system/node performance. It is thus worthwhile to take both the failure rate and the computational capacity of resources into consideration during scheduling. In this paper, to improve system performance, we propose a failure-aware scheduling algorithm that considers both performance and failure factors.
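One simple way to combine the two factors the abstract names is to rank nodes by an effective capacity that discounts raw speed by the chance of failure. The weighting below (capacity times success probability) is a hypothetical scoring rule for illustration; the paper's actual combination is not given in the abstract.

```python
def rank_nodes(nodes):
    """Rank Grid nodes by effective capacity: raw computational capacity
    scaled by the probability the node does not fail. A fast node with a
    high failure rate can thus rank below a slower, reliable one."""
    return sorted(nodes,
                  key=lambda n: n["capacity"] * (1.0 - n["failure_rate"]),
                  reverse=True)

fast_but_flaky  = {"name": "A", "capacity": 100.0, "failure_rate": 0.40}
slow_but_stable = {"name": "B", "capacity": 80.0,  "failure_rate": 0.05}
best = rank_nodes([fast_but_flaky, slow_but_stable])[0]
# B's effective capacity 80 * 0.95 = 76 beats A's 100 * 0.60 = 60
```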
MapReduce is currently the most mainstream parallel computation model for processing large-scale datasets, and task scheduling, as a crucial module of MapReduce, is an important research topic. However, the existing delay scheduling algorithm has two main problems: (1) its theoretical assumption that all tasks are short is restrictive, and when nodes process tasks of different lengths its performance declines; (2) all tasks use a fixed waiting time, which cannot meet the needs of different users. To solve these two problems, this paper proposes the dynamic delay scheduling algorithm based on task classification (TCDDS). The TCDDS algorithm divides tasks into categories using fuzzy mathematics and gives different waiting times to the different categories of tasks, which reduces the response time of the whole job and improves the algorithm's performance.
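The core TCDDS idea (length-dependent delay waiting) can be sketched with a single fuzzy membership function: a task's membership in the "short" class scales how long the scheduler waits for a data-local slot. The thresholds and the linear membership shape below are illustrative assumptions, not values from the paper.

```python
def wait_time(task_len, short=10.0, long=60.0, base_wait=3.0):
    """Fuzzy-classified delay wait: short tasks wait the full base_wait
    for a data-local slot, long tasks skip the delay entirely, and tasks
    in between wait in proportion to their membership in 'short'."""
    if task_len <= short:
        mu_short = 1.0                               # fully a short task
    elif task_len >= long:
        mu_short = 0.0                               # fully a long task
    else:
        mu_short = (long - task_len) / (long - short)  # linear membership
    return base_wait * mu_short
```

With these thresholds a 5-second task waits the full 3 seconds for locality, a 100-second task is launched immediately, and a 35-second task waits 1.5 seconds.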
ISBN (digital): 9798350381993
ISBN (print): 9798350382006
With the increasing demand for high performance and low power consumption in embedded real-time systems, performance-asymmetric multiprocessor architectures are beginning to be applied to them, because this heterogeneous architecture allows the system to allocate computational resources as needed to satisfy the demands of each application and of dynamic workloads. Most previous studies of EDZL scheduling algorithms have focused on task prioritization, and they simply assign the highest-priority tasks to the fastest processors. In this paper, we instead assign the highest-priority tasks to the slowest processor and propose the SSF-EDZL (slowest speed fit for earliest deadline first until zero laxity) scheduling algorithm. We compare SSF-EDZL with other scheduling algorithms, and the experimental results show that SSF-EDZL is the better algorithm, with higher processor utilization and better scheduling performance.
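The slowest-speed-fit assignment step can be sketched as below: ready jobs are ordered by EDZL priority (zero-laxity jobs first, then earliest deadline) and the highest-priority jobs are mapped to the slowest processors, the reverse of the usual fastest-first rule. The job fields are hypothetical, and a real EDZL scheduler would re-evaluate laxity at every scheduling event; this shows one assignment instant only.

```python
def ssf_assign(jobs, speeds, t):
    """Map ready jobs to processors, slowest-speed-fit style.
    Priority: zero (or negative) laxity beats positive laxity; ties are
    broken earliest-deadline-first. Processors are taken slowest first."""
    def priority(job):
        laxity = job["deadline"] - t - job["remaining"]
        return (0 if laxity <= 0 else 1, job["deadline"])
    ordered = sorted(jobs, key=priority)                      # highest priority first
    procs = sorted(range(len(speeds)), key=lambda p: speeds[p])  # slowest first
    return {ordered[i]["name"]: procs[i]
            for i in range(min(len(ordered), len(procs)))}
```

At t = 0, a job with deadline 5 and 5 units remaining has zero laxity, so it outranks a job with deadline 10 and 2 units remaining and lands on the slower processor.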
Cloud is a new trend in computing that combines distributed, parallel and grid computing under a pay-on-demand model. A cloud is essentially a shared pool of resources, shared among geographically distributed users. As the cloud grows, so do its issues, which relate to security, resource allocation, performance and cost. This paper focuses on the resource allocation problem and discusses a task allocation algorithm to solve it. The algorithm combines two task scheduling policies: one priority-based, the other earliest-deadline-first. Tasks are allocated on the basis of their priority, and the task with the higher priority is scheduled first. The simulation is performed on the CloudSim toolkit, a Java-based simulator. The proposed work results in higher performance and also improves memory utilization.
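The combined policy described in the abstract (priority first, earliest deadline as tie-breaker) reduces to a single ordering, sketched below with a heap. Field names are illustrative; the paper itself operates on CloudSim entities.

```python
import heapq

def schedule_order(tasks):
    """Return task names in service order: descending priority, with
    ties broken earliest-deadline-first. A min-heap on
    (-priority, deadline) gives exactly this ordering."""
    heap = [(-t["priority"], t["deadline"], t["name"]) for t in tasks]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

Two equal-priority tasks are thus served by deadline, while a higher-priority task always jumps ahead regardless of its deadline.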
In today's world of large distributed systems, the need for energy efficiency of individual components is complemented by the need for energy awareness of the complete system. Hence, energy-aware scheduling of tasks has become very important. Our work addresses the problem of finding an energy-aware schedule for a given system that also satisfies the precedence constraints between the tasks to be performed. We present a method that uses cellular automata to find a near-optimal schedule; the rules for the cellular automata are learned using a genetic algorithm. Although the work presented in this paper is not limited to scheduling in computing environments, it is validated with a sample simulation on distributed computing systems and tested with some standard program graphs.
ISBN (print): 9781509020850
Cloud computing is now trending and has been adopted by many companies such as Google, Amazon and Microsoft. As cloud size increases with the number of data centers, the power consumption of a data center increases, and the number of requests to the data center grows along with its load and power consumption. Requests therefore need to be balanced with a strategy that uses resources effectively and reduces power consumption: balancing requests without knowledge of the load on each server may maximize resource utilization, but it also increases power consumption at the server. To overcome these issues in cloud Infrastructure as a Service (IaaS), we propose a trust-based scheduling algorithm using ant colony optimization to minimize load and improve the QoS of the system. The proposed algorithm shows better performance in terms of load and reduced request failures compared with previously proposed scheduling and balancing algorithms for cloud IaaS.
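The ant-colony selection step behind such an algorithm can be sketched as a probabilistic choice weighted by pheromone and inverse load. The alpha/beta exponents and the load term are illustrative; the paper additionally folds a trust value into the weight, which is omitted here.

```python
import random

def pick_server(pheromone, load, alpha=1.0, beta=1.0, rng=None):
    """Roulette-wheel server choice, ant-colony style: probability
    proportional to pheromone^alpha * (1 / (1 + load))^beta, so lightly
    loaded, well-reinforced servers are favoured but not guaranteed."""
    rng = rng or random
    weights = [(ph ** alpha) * ((1.0 / (1.0 + ld)) ** beta)
               for ph, ld in zip(pheromone, load)]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1
```

In a full algorithm, ants deposit pheromone on servers that completed requests successfully, so trust and past performance steer future load away from failing nodes.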
In this paper we propose an optimal deployment and distributed packet scheduling for multi-sink wireless sensor networks (WSNs). This work is devoted to computing the optimal deployment of sinks for a given maximum number of hops between nodes and sinks. We also propose an optimal distributed packet scheduling in order to estimate the minimum energy consumption. We consider the energy consumed by reporting, forwarding and overhearing. In contrast to reporting and forwarding, the energy used in overhearing is difficult to estimate because it depends on the packet scheduling. We therefore determine a lower bound on overhearing, based on an optimal distributed packet-scheduling formulation. We also propose another estimation of the lower bound that simulates non-interfering parallel transmissions and is more tractable in large networks. We note that overhearing largely predominates in energy consumption. A large part of the optimizations and computations carried out in this paper are obtained using an ILP formalization.
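The sink-deployment objective can be illustrated with a brute-force stand-in for the ILP: choose k sink positions among the nodes so that the maximum hop distance from any node to its nearest sink is minimised. The hop-count matrix and the exhaustive search are assumptions for a small example; the paper solves this with ILP at scale.

```python
from itertools import combinations

def best_sinks(dist, k):
    """Exhaustively pick k sink positions minimising the worst-case
    hop count from any node to its nearest sink. `dist[i][j]` is the
    precomputed hop distance between nodes i and j."""
    n = len(dist)
    best = (float("inf"), None)
    for sinks in combinations(range(n), k):
        worst = max(min(dist[v][s] for s in sinks) for v in range(n))
        if worst < best[0]:
            best = (worst, sinks)    # keep the first optimum found
    return best
```

On a 5-node line topology, one sink is best placed at the centre (worst case 2 hops), while two sinks at positions 1 and 3 cut the worst case to a single hop.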