ISBN:
(Print) 9781538664889
Current cloud computing infrastructures and their management are highly centralized, and therefore they suffer from limitations in terms of network latency, energy consumption, and possible legal restrictions. Decentralizing the cloud has recently been proposed as an alternative. However, the efficient management of a geographically dispersed platform brings new challenges related to service localization, network utilization and locality-awareness. Here we consider a cloud topology composed of many small datacenters geographically dispersed within the backbone network. In this paper, we present the design, development and experimental validation of Koala, a novel overlay network that specifically targets such a geographically distributed cloud platform. The three key characteristics of Koala are laziness, latency-awareness and topology-awareness. By using application traffic, Koala maintains the overlay lazily while taking locality into account in each routing decision. Although Koala's performance depends on application traffic, through simulation experiments we show that for uniformly distributed traffic, Koala delivers similar routing complexity and reduced latency compared to a traditional proactive protocol such as Chord. Additionally, we show that despite its passive maintenance, Koala can appropriately deal with churn by keeping the number of routing failures low, without significantly degrading the routing performance. Finally, we show how such an overlay adapts to a decentralized cloud composed of multiple small datacenters.
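For illustration only, the sketch below shows one way a latency- and topology-aware greedy routing step with lazy, traffic-driven maintenance could look. The identifier space, the `NeighborEntry` structure and the scoring weight are assumptions; this does not reproduce Koala's actual algorithm.

```python
# Illustrative sketch of a latency-aware greedy routing step with lazy
# maintenance; names, ring size and weights are hypothetical.

from dataclasses import dataclass

RING_SIZE = 2 ** 16  # assumed size of the overlay identifier space

@dataclass
class NeighborEntry:
    node_id: int
    latency_ms: float  # estimated from observed application traffic

def id_distance(a: int, b: int) -> int:
    """Clockwise distance between two identifiers on the ring."""
    return (b - a) % RING_SIZE

def next_hop(current_id: int, target_id: int,
             neighbors: list[NeighborEntry],
             latency_weight: float = 0.5) -> NeighborEntry | None:
    """Pick a neighbor that makes progress toward the target while
    penalizing high-latency first hops (locality-aware decision)."""
    candidates = [n for n in neighbors
                  if id_distance(n.node_id, target_id) < id_distance(current_id, target_id)]
    if not candidates:
        return None  # the current node is the closest one it knows of
    # Score mixes remaining ID distance with the latency of the first hop.
    return min(candidates,
               key=lambda n: id_distance(n.node_id, target_id)
                             + latency_weight * n.latency_ms)

def observe_traffic(neighbors: list[NeighborEntry], node_id: int, rtt_ms: float) -> None:
    """Lazy maintenance: latency estimates are refreshed only when
    application traffic actually traverses a link (no periodic probing)."""
    for n in neighbors:
        if n.node_id == node_id:
            n.latency_ms = 0.8 * n.latency_ms + 0.2 * rtt_ms  # smoothed update
```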
In this paper, a novel parallel network traffic control mechanism for cloud computing is proposed, based on the packet scheduler HTB (Hierarchical Token Buckets). The idea of bandwidth borrowing in HTB makes it suitable...
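As background for the bandwidth-borrowing idea mentioned above, here is a minimal Python sketch of hierarchical token buckets in which a child class borrows unused capacity from its parent. The class names and rates are illustrative assumptions, not the paper's parallel mechanism.

```python
import time

class TokenBucket:
    """Minimal hierarchical token bucket: a child that runs out of tokens
    may borrow from its parent, which is the core idea behind HTB's
    bandwidth borrowing."""

    def __init__(self, rate_bps: float, burst_bytes: float,
                 parent: "TokenBucket | None" = None):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.parent = parent
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def try_send(self, packet_bytes: int) -> bool:
        """Consume tokens for a packet; borrow from the parent if this
        bucket cannot cover the packet on its own (simplified model)."""
        self._refill()
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        if self.parent is not None and self.parent.try_send(packet_bytes):
            return True   # borrowed the parent's spare bandwidth
        return False      # packet must wait, be queued or dropped

# Example: two tenants share a 100 Mbit/s link; either may borrow idle capacity.
link = TokenBucket(rate_bps=100e6, burst_bytes=1e6)
tenant_a = TokenBucket(rate_bps=40e6, burst_bytes=2e5, parent=link)
tenant_b = TokenBucket(rate_bps=60e6, burst_bytes=2e5, parent=link)
```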
Cloud computing is an integrated infrastructure for resource sharing and computing in a distributed environment. In this paper, we propose an ontology management approach in cloud computing to provide a unique semantic ...
Cloud computing is an emerging computing paradigm. It aims to share data, calculations, and services transparently among users of a massive grid. It has become a hot topic for its advantages such as "reduced costs"...
Efficient energy supply and consumption play a substantial role in the energy grid, especially with renewable energy sources. Renewable power sources are unreliable, which makes the grid difficult to manage. A smart ene...
Developing local public sports services is a crucial part of urban public services. It directly affects people's quality of life and plays an essential role in sports development. However,...
The presence of soft faulty sensor nodes in wireless sensor networks (WSNs) creates problems for maintaining consistent performance over the entire life span. For this, a distributed fault detection algorithm is pr...
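Since this abstract is truncated, the sketch below only illustrates a generic neighbor-comparison test commonly used for detecting soft-faulty nodes in WSNs; the tolerance and majority parameters are assumptions, and this is not claimed to be the algorithm the paper proposes.

```python
# Generic neighbor-comparison check for soft-faulty sensor nodes; the
# thresholds are hypothetical and purely for illustration.

def is_probably_faulty(own_reading: float,
                       neighbor_readings: list[float],
                       tolerance: float = 2.0,
                       majority: float = 0.5) -> bool:
    """Flag a node as (soft) faulty when its reading disagrees with more
    than `majority` of its neighbors by more than `tolerance`."""
    if not neighbor_readings:
        return False  # no neighbors to compare against
    disagreements = sum(1 for r in neighbor_readings
                        if abs(own_reading - r) > tolerance)
    return disagreements / len(neighbor_readings) > majority

# Example: a node reading 31.0 among neighbors reading ~20 is flagged.
print(is_probably_faulty(31.0, [19.8, 20.1, 20.4, 19.9]))   # True
print(is_probably_faulty(20.2, [19.8, 20.1, 20.4, 19.9]))   # False
```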
ISBN:
(Print) 9781315685892; 9781138028005
Many simulation systems in use today are difficult to install, configure, and execute. Cloud computing has received a significant amount of attention, leading to large information technology (IT) initiatives at the corporate and government levels. It is therefore natural for the modeling and simulation (M&S) community to explore cloud computing. This paper puts forward a distributed simulation framework that makes use of cloud computing.
ISBN:
(Print) 9781467329255; 9781467329224
Virtualization, as a technology that enables easy and effective resource sharing with a low cost and energy footprint, is becoming increasingly popular not only in enterprises but also in high performance computing. Applications with stringent performance needs often make use of graphics processors to accelerate their computations. Hence, virtualization solutions that support GPU acceleration are gaining importance. This paper presents a detailed evaluation of three frameworks that support GPU acceleration through CUDA within a virtual machine: rCUDA, gVirtuS and Xen. We describe the architectures of these three solutions and compare and contrast them in terms of their fidelity, performance, multiplexing and interposition characteristics.
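As a rough illustration of how such a comparison can be driven, the following sketch times an external CUDA benchmark under different execution environments. The `./cuda_bench` binary and the environment commands are placeholders, not the evaluation setup used in the paper.

```python
# Hypothetical timing harness for comparing the same CUDA benchmark under
# different execution environments (e.g. native vs. a virtualized setup).
# The benchmark binary and the environment labels are placeholders.

import statistics
import subprocess
import time

def time_benchmark(cmd: list[str], repetitions: int = 5) -> tuple[float, float]:
    """Run a benchmark command several times; return mean and stdev in seconds."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    # Each entry maps an environment label to the command that launches the
    # same CUDA benchmark inside that environment (assumed to exist already).
    setups = {
        "native": ["./cuda_bench"],
        "virtualized": ["ssh", "guest-vm", "./cuda_bench"],  # placeholder guest
    }
    for label, cmd in setups.items():
        mean_s, stdev_s = time_benchmark(cmd)
        print(f"{label}: {mean_s:.3f} s ± {stdev_s:.3f} s")
```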
ISBN:
(Print) 9781467329255
In this paper we investigate Monte Carlo optimisation of a fitness function on a multi-GPU cluster. Our main goal is to develop auto-tuning techniques for the GPU cluster. Monte Carlo, or random sampling, is a technique to optimise a fitness function by assigning random values to its parameters. When evaluating the fitness function requires a large amount of computational power, Monte Carlo sampling becomes both very time-consuming and computationally expensive. A developer who is not familiar with the application, the hardware, and the CUDA runtime cannot determine the optimal execution parameters, which makes GPU auto-tuning well suited to achieving better performance and reducing computing time. Finally, we compare the execution time with the sequential CPU as well as the multi-core CPU implementation.
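The following sketch illustrates the random-sampling idea described above: launch configurations are drawn at random and the fastest one is kept. The parameter ranges and the `run_and_time` hook are hypothetical placeholders rather than the authors' actual tuner.

```python
# Minimal sketch of Monte Carlo (random-sampling) auto-tuning of GPU launch
# parameters; `run_and_time` is a placeholder for launching the real kernel.

import random

def run_and_time(block_size: int, grid_size: int, unroll: int) -> float:
    """Placeholder: launch the kernel with these parameters and return the
    measured execution time in milliseconds."""
    raise NotImplementedError("hook up to the actual GPU benchmark here")

def monte_carlo_tune(samples: int = 100, seed: int = 42):
    """Randomly sample launch configurations and keep the fastest one."""
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(samples):
        cfg = (rng.choice([64, 128, 256, 512, 1024]),   # threads per block
               rng.choice([32, 64, 128, 256, 512]),     # blocks per grid
               rng.choice([1, 2, 4, 8]))                # loop unroll factor
        elapsed = run_and_time(*cfg)
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time
```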