In traditional distributed computing systems, only a few user types are found, with rather "flat" profiles, mainly because the users belong to the same administrative domain. This is quite different in Computational Grids (CGs), in which several user types co-exist and make use of resources according to the hierarchical nature of the system and the presence of multiple administrative domains. One implication of the existence of different hierarchical levels in CGs is that they impose different access and usage policies on resources. In this paper we first highlight the most common Grid user types, their relationships, and their access scenarios in CGs, corresponding to both old (e.g., performance) and new (e.g., security) requirements. Then, we identify and analyze new features arising in users' behavior in Grid scheduling, such as dynamic, selfish, cooperative, trustful, symmetric, and asymmetric behaviors. We also discuss how computational economy-based approaches, such as market mechanisms, and computational paradigms, such as Neural Networks, can be used to model user requirements and predict users' behaviors in CGs. As a result of this study, we provide a comprehensive analysis of Grid user scenarios that can serve as a basis for application designers in CGs.
This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both the application and system levels, combining user-level job scheduling with a dynamic workload balancing algorithm that automatically adapts a parallel application to the heterogeneous resources, based on the actual resource parameters and the estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of the resource heterogeneity level is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources. (c) 2008 Elsevier B.V. All rights reserved.
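The core of the balancing idea above is straightforward to sketch: give each heterogeneous node a share of the work proportional to its measured speed. The function below is a minimal, hypothetical illustration of that principle; the node names, speed figures, and the simple proportional rule are assumptions for illustration, not the paper's actual algorithm.

```python
def balance_workload(total_units, node_speeds):
    """Assign each node a share of `total_units` proportional to its speed.

    node_speeds: dict mapping node name -> relative speed (any positive scale).
    Returns a dict mapping node name -> number of work units assigned.
    """
    total_speed = sum(node_speeds.values())
    shares = {}
    assigned = 0
    for node, speed in node_speeds.items():
        share = int(total_units * speed / total_speed)  # proportional split
        shares[node] = share
        assigned += share
    # Hand any rounding remainder to the fastest node.
    fastest = max(node_speeds, key=node_speeds.get)
    shares[fastest] += total_units - assigned
    return shares
```

For example, with relative speeds 4:2:1 across three nodes, 700 work units split as 400/200/100; in a real system the speeds would be remeasured periodically so the distribution adapts as load changes.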
The advent of commodity-based high-performance clusters has raised parallel and distributed computing to a new level. However, in order to achieve the best possible performance improvements for large-scale computing problems as well as good resource utilization, efficient resource management and scheduling is required. This paper proposes a new two-level adaptive space-sharing scheduling policy for non-dedicated heterogeneous commodity-based high-performance clusters. Using trace-driven simulation, the performance of the proposed scheduling policy is compared with existing adaptive space-sharing policies. Results of the simulation show that the proposed policy performs substantially better than the existing policies. (C) 2006 Elsevier B.V. All rights reserved.
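Adaptive space-sharing, as mentioned above, shrinks the processor partition granted to each job as the queue grows, so that free processors are split among waiting jobs. The rule below is a generic textbook-style sketch of that idea, not the paper's actual two-level policy; the parameter names are assumptions.

```python
import math

def partition_size(free_procs, queued_jobs, requested):
    """Adaptive space-sharing rule: grant a job an equal split of the free
    processors (accounting for the jobs waiting behind it), capped by what
    the job actually requested, and never less than one processor."""
    split = math.ceil(free_procs / (queued_jobs + 1))
    return max(1, min(split, requested))
```

With 16 free processors and 3 queued jobs, a job requesting 8 processors is granted only 4; with an empty queue it gets its full request of 8.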
The next step in the evolution of UMTS is the Enhanced Uplink, or high speed uplink packet access (HSUPA), which is designed for the efficient transport of packet-switched data. We propose an analytic modeling approach for the performance evaluation of the UMTS uplink with best-effort users over the enhanced uplink and QoS users over dedicated channels. The model considers two different scheduling disciplines for the enhanced uplink: parallel scheduling and one-by-one scheduling. Resource management in such a system has to consider the requirements of the dedicated channel users and the enhanced uplink users on the shared resource, i.e., the cell load. We evaluate the impact of two resource management strategies, one with preemption for dedicated channels and one without, on key QoS indicators like blocking and dropping probabilities as well as user and cell throughput.
ISBN: (print) 9781424449231
In large distributed systems, where shared resources are owned by distinct entities, there is a need to reflect resource ownership in resource allocation. An appropriate resource management system should guarantee that resource owners have access to a share of resources proportional to the share they provide. In order to achieve that, policies can be used for revoking access to resources currently used by other users. In this paper, a scheduling policy based on the concept of distributed ownership is introduced, called the Owner Share Enforcement Policy (OSEP). OSEP's goal is to guarantee that owners do not have their jobs postponed for long periods of time. We evaluate the results achieved with the application of this policy using metrics that describe policy violation, loss of capacity, policy cost, and user satisfaction in environments with and without job checkpointing. We also evaluate and compare the OSEP policy with the Fair-Share policy, and from these results it is possible to capture the trade-offs of different ways to achieve fairness based on user satisfaction.
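The enforcement decision described above can be sketched as a simple counting rule: if an owner is using fewer slots than their entitlement, guest jobs are marked for revocation until the entitlement can be met. This is a hypothetical illustration of the ownership-enforcement idea; the one-slot-per-job model and function names are assumptions, not the paper's exact OSEP algorithm.

```python
def jobs_to_revoke(owner_demand, owner_share, running):
    """Decide which guest jobs to revoke on an owner's resources.

    running: list of (job_id, user) pairs currently occupying slots,
             one slot per job; the owner's jobs carry user == "owner".
    Returns the guest job ids to revoke so the owner can claim up to
    `owner_share` slots for `owner_demand` pending jobs."""
    owner_running = sum(1 for _, user in running if user == "owner")
    entitled = min(owner_demand, owner_share)   # owner's current entitlement
    deficit = max(0, entitled - owner_running)  # slots still owed to the owner
    guests = [job for job, user in running if user != "owner"]
    return guests[:deficit]
```

With checkpointing, the revoked guest jobs could resume later from their saved state, which is exactly the trade-off the abstract's evaluation explores.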
ISBN: (print) 3642046320
The proceedings contain 15 papers. The topics discussed include: dynamic resource-critical workflow scheduling in heterogeneous environments; decentralized grid scheduling with evolutionary fuzzy systems; analyzing the EGEE production grid workload: application to jobs submission optimization; the resource usage aware backfilling; the gain of overbooking; modeling parallel system workloads with temporal locality; scheduling restartable jobs with short test runs; effects of topology-aware allocation policies on scheduling performance; contention-aware scheduling with task duplication; job admission and resource allocation in distributed streaming systems; scalability analysis of job scheduling using virtual nodes; competitive two-level adaptive scheduling using resource augmentation; and job scheduling with lookahead group matchmaking for time/space sharing on multi-core parallel machines.
ISBN: (print) 9781424437511
Multi-core processors have changed the conventional hardware structure and require a rethinking of system scheduling and resource management to utilize them efficiently. However, current multi-core systems are still using conventional single-core memory scheduling. In this study, we investigate and evaluate traditional memory access scheduling techniques, and propose a core-aware memory scheduling for multi-core environments. Since memory requests from the same source exhibit better locality, it is reasonable to schedule the requests by taking the source of the requests into consideration. Motivated by this principle of locality, we propose two core-aware policies based on the traditional bank-first and row-first schemes. Simulation results show that the core-aware policies can effectively improve performance. Compared with the bank-first and row-first policies, the proposed core-aware policies reduce the execution time of certain NAS Parallel Benchmarks by up to 20% when running the benchmarks separately, and by 11% when running them concurrently.
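The combination of core-awareness with a row-first scheme can be sketched as a two-level ordering: requests are grouped by their source core to exploit per-core locality, and within each core's batch, requests hitting the currently open DRAM row are served first. The request model below is a hypothetical simplification for illustration, not the paper's simulated memory controller.

```python
from collections import defaultdict

def core_aware_schedule(requests, open_rows):
    """Core-aware, row-first request ordering.

    requests: list of (core_id, bank, row) memory requests.
    open_rows: dict mapping bank -> currently open row in that bank.
    Returns the requests reordered core-by-core, row-buffer hits first."""
    by_core = defaultdict(list)
    for req in requests:
        by_core[req[0]].append(req)  # group requests by source core
    order = []
    for core in sorted(by_core):
        batch = by_core[core]
        # Within a core's batch, serve row-buffer hits before misses.
        hits = [r for r in batch if open_rows.get(r[1]) == r[2]]
        misses = [r for r in batch if open_rows.get(r[1]) != r[2]]
        order.extend(hits + misses)
    return order
```

Serving same-core requests back-to-back is what lets the row buffer stay "warm": consecutive requests from one core are more likely to touch the same row than an interleaved stream from many cores.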
ISBN: (print) 9783642046322
This paper describes a novel scheme for job admission and resource allocation employed by the SODA scheduler in System S. Capable of processing enormous quantities of streaming data, System S is a large-scale, distributed stream processing system designed to handle complex applications. The problem of scheduling in distributed, stream-based systems is quite unlike that in more traditional systems. And the requirements for System S, in particular, are more stringent than one might expect even in a "standard" stream-based design. For example, in System S, the offered load is expected to vastly exceed system capacity, so a careful job admission scheme is essential. The jobs in System S are essentially directed graphs, with software "processing elements" (PEs) as vertices and data streams as edges connecting the PEs. The jobs themselves are often heavily interconnected. Thus resource allocation of individual PEs must be done carefully in order to balance the flow. We describe the design of the SODA scheduler, with particular emphasis on the component, known as macroQ, which performs the job admission and resource allocation tasks. We demonstrate by experiments the natural trade-offs between job admission and resource allocation.
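When offered load vastly exceeds capacity, as the abstract notes, some jobs must simply be rejected. A common way to frame that decision is a greedy admission pass ranked by importance per unit of resource; the sketch below illustrates that generic framing under an assumed value/cost model and is not the macroQ algorithm itself.

```python
def admit_jobs(jobs, capacity):
    """Greedy overload admission: admit jobs in order of importance density.

    jobs: list of (job_id, importance, resource_need) tuples.
    capacity: total resource budget available.
    Returns the ids of admitted jobs, in admission order."""
    # Rank jobs by importance per unit of resource consumed.
    ranked = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)
    admitted, used = [], 0
    for job_id, _, need in ranked:
        if used + need <= capacity:  # admit only if the job still fits
            admitted.append(job_id)
            used += need
    return admitted
```

In a streaming system the "resource need" of a job would itself be negotiable (each PE can be given more or fewer cycles), which is precisely why admission and allocation interact and trade off against each other.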
ISBN: (print) 9780769537139
The master/worker (MW) paradigm can be used to implement parallel discrete event simulations (PDES) on metacomputing systems. MW PDES applications incur overheads not found in conventional PDES executions on tightly coupled machines. We introduce four techniques for reducing these overheads on public-resource and desktop grid infrastructures: work unit caching, pipelined state updates, expedited message delivery, and adaptive work unit scheduling. These mechanisms provide a significant reduction in overall overhead when used in tandem. We present performance results showing that an optimized MW PDES system can exhibit performance comparable to a traditional PDES system for a queueing network and a particle physics simulation.
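Of the four techniques named above, work unit caching is the most self-contained to sketch: a worker keeps recently used work units locally so that repeated assignments avoid a round trip to the master. The LRU interface below is an assumed illustration of that idea, not the paper's implementation.

```python
from collections import OrderedDict

class WorkUnitCache:
    """Small LRU cache of work units held on a worker node."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, unit_id):
        if unit_id in self._items:
            self._items.move_to_end(unit_id)  # mark as recently used
            return self._items[unit_id]
        return None  # cache miss: the worker must fetch from the master

    def put(self, unit_id, payload):
        self._items[unit_id] = payload
        self._items.move_to_end(unit_id)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used
```

On a desktop grid, every avoided fetch saves a wide-area round trip to the master, which is where most of the MW PDES overhead comes from in the first place.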
According to the characteristics of Simulation Grid resources (SGRs), an extended Web Service Description Language (WSDL) was adopted to describe the attributes of SGRs, in order to facilitate the application of machine learn...