ISBN: (Print) 9781467309745
Cloud computing is a hot topic in both industry and academia. Virtualization deployed in large data centers forms the basis of cloud computing and includes CPU, I/O, and memory virtualization. Time-sharing of CPU cycles among multiple virtual machines (VMs) has been the main bottleneck of system-level virtualization, and how to schedule CPU cycles among multiple VMs to improve the QoS of web applications needs further study. This paper first proposes a CPU management architecture for multiple VMs. We then formulate the CPU scheduling problem as an integer programming problem. Building on this, we put forward a CPU scheduling algorithm based on utility optimization theory (UOCRS) to increase global utility. Experiments show that our scheme remarkably improves the performance of web applications.
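The abstract does not give the details of UOCRS, but the general shape of a utility-maximizing integer allocation can be sketched. Below is a minimal illustration, not the paper's algorithm: integer CPU units are assigned greedily to the VM with the largest marginal utility gain, which is optimal when each per-VM utility is concave. The log utilities and weights are hypothetical.

```python
import math

def allocate_cpu(total_units, utilities):
    """Greedily give each integer CPU unit to the VM whose utility
    rises the most; optimal for concave per-VM utility functions."""
    alloc = [0] * len(utilities)
    for _ in range(total_units):
        gains = [u(alloc[i] + 1) - u(alloc[i]) for i, u in enumerate(utilities)]
        best = max(range(len(utilities)), key=lambda i: gains[i])
        alloc[best] += 1
    return alloc

# Hypothetical priority-weighted log utilities for three VMs.
utils = [lambda x, w=w: w * math.log(1 + x) for w in (1.0, 2.0, 1.0)]
alloc = allocate_cpu(10, utils)
print(alloc)
```

With the higher-priority middle VM, the greedy rule hands it the largest share while still giving every VM some capacity, mirroring the fairness-versus-utility trade-off the abstract alludes to.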
ISBN: (Print) 9781467309745
Data Stream Processing or stream computing is the new computing paradigm for processing a massive amount of streaming data in real-time without storing them in secondary storage. In this paper we propose an integrated execution platform for Data Stream Processing and Hadoop with dynamic load balancing mechanism to realize an efficient operation of computer systems and reduction of latency of Data Stream Processing. Our implementation is built on top of System S, a distributed data stream processing system developed by IBM Research. Our experimental results show that our load balancing mechanism could increase CPU usage from 47.77% to 72.14% when compared to the one with no load balancing. Moreover, the result shows that latency for stream processing jobs are kept low even in a bursty situation by dynamically allocating more compute resources to stream processing jobs.
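The paper's mechanism is built on System S, whose internals are not described here; the following is only a toy sketch of the latency-driven rebalancing idea, with all thresholds and names hypothetical. A node is shifted from the Hadoop (batch) pool to the stream pool when stream latency exceeds a target, and returned when there is ample slack.

```python
def rebalance(stream_nodes, batch_nodes, latency_ms, target_ms=100):
    """Shift one node from the batch (Hadoop) pool to the stream pool
    when latency exceeds the target, and give it back under slack."""
    if latency_ms > target_ms and batch_nodes > 0:
        return stream_nodes + 1, batch_nodes - 1
    if latency_ms < target_ms * 0.5 and stream_nodes > 1:
        return stream_nodes - 1, batch_nodes + 1
    return stream_nodes, batch_nodes

# A latency spike pulls a node into the stream pool.
print(rebalance(2, 4, 250))
```

Moving one node per control interval is a deliberately conservative choice; it avoids oscillation between the two pools during bursty load.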
Summary form only given, as follows. As we look toward exascale systems and a new generation of computing hardware begins to take shape, new software challenges have also emerged. It is therefore an exciting year for computer scientists. We must not fear the challenges ahead, but we must be willing to break the rules to achieve our exascale goals. Node architectures are rapidly changing. Every hardware company is looking for ways to squeeze out more performance per watt. System architects are also working on ways to integrate fast networking and memory, increase parallelism, and manage heterogeneous computing elements. Building special-purpose exascale systems from this new technology will fundamentally change many parts of our system software stack. While it may be years before disruptive and emerging technology paths become clear and architectures converge on fundamental design patterns, there are many exciting areas of advanced research that can be addressed today. Other areas are yet to be explored. This presentation will focus on system software, the code that sits between the application and the hardware, which must either evolve or be reinvented to reach our computing goals.
The emergence of cloud computing services has created an additional dimension in the decision to create a new business plan that requires computing resources. Although several recently published analytical models have provided very good groundwork for assessing the cost difference between in-house systems and leased cloud services for the same set of computing requirements, there is a need for research that provides a company with risk analysis tools allowing business managers to assess the viability of migrating business applications to the cloud under broader considerations. This article consolidates several genres of analysis of cloud computing risk. We derive a new analytical model that takes into account business economics as well as other business requirements, such as various considerations of security and availability. The model applies these considerations to the well-researched buy-or-lease models currently used by business management professionals, which allows for an easy transition to this more thorough model. We conducted a comparative evaluation of the service requirements from published research against two service providers, and the results demonstrate the veracity of our model. The rationale for this research is to provide a more complete and robust model that encompasses the entire spectrum of considerations yet retains a high degree of precision.
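The model itself is not reproduced in the abstract. As a rough illustration of the buy-or-lease skeleton it extends, a standard discounted-cost comparison can be augmented with expected-loss terms for availability and security; every figure and parameter below is a made-up placeholder, not from the paper.

```python
def npv_cost(upfront, yearly, years, rate):
    """Net present value of a cost stream: upfront capital plus
    yearly cost discounted at the given rate."""
    return upfront + sum(yearly / (1 + rate) ** t for t in range(1, years + 1))

def risk_adjusted(base_cost, downtime_prob, downtime_loss, security_premium=0.0):
    """Add expected annual downtime loss and a security premium to a
    base cost (illustrative linear risk weighting)."""
    return base_cost + downtime_prob * downtime_loss + security_premium

# Hypothetical 3-year comparison at a 5% discount rate.
in_house = risk_adjusted(npv_cost(120000, 20000, 3, 0.05), 0.001, 80000)
cloud = risk_adjusted(npv_cost(0, 50000, 3, 0.05), 0.005, 80000,
                      security_premium=2000)
print(in_house, cloud)
```

The point of such a formulation is that the decision can flip once the risk terms are included, even when the raw buy-or-lease comparison favors one side.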
This paper presents an ongoing experience in the early introduction of parallelism in the Computing Engineering degree. Four courses of the second year and a computing centre participate in the experience. The courses are given by three departments. Students are introduced to parallelism for the first time in the second year, and with our experience we aim to approach different topics of parallelism in a coordinated and practical way.
Real-valued black-box optimization of badly behaved and poorly understood functions is a broad topic in many scientific areas. Possible applications range from maximizing portfolio profits in financial mathematics, through efficient training of neural networks in computational linguistics, to parameter identification of metabolism models in industrial biotechnology. This paper presents a comparison of several global as well as local optimization strategies applied to the task of efficiently identifying the free parameters of a metabolic network model. A focus is set on the ease of adapting these strategies to modern, highly parallel architectures. Finally, an outlook on the possible parallel performance is presented.
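The specific strategies compared are not named in the abstract. To make the global-versus-local distinction concrete, here is a minimal sketch with two stand-ins: uniform random search (global, trivially parallel across evaluations) and an accept-if-better local perturbation search. The toy objective stands in for a model-fit cost; nothing here is from the paper.

```python
import random

def random_search(f, bounds, n, rng):
    """Global strategy: sample n points uniformly in bounds and keep
    the best; each evaluation is independent, so it parallelizes."""
    return min((rng.uniform(*bounds) for _ in range(n)), key=f)

def local_search(f, x0, step, n, rng):
    """Local strategy: propose a Gaussian perturbation and accept it
    only when it lowers the objective."""
    x = x0
    for _ in range(n):
        cand = x + rng.gauss(0, step)
        if f(cand) < f(x):
            x = cand
    return x

# Toy objective with known minimum at x = 2, standing in for a
# metabolic-model parameter-fit cost.
f = lambda x: (x - 2) ** 2
print(random_search(f, (-5, 5), 200, random.Random(0)))
print(local_search(f, 0.0, 0.5, 500, random.Random(0)))
```

On a multimodal objective the trade-off appears: the local strategy can stall in a basin, while the global one keeps exploring but converges slowly, which is exactly why such comparisons (and hybrid, parallelized variants) are worthwhile.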
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of han...
ISBN: (Print) 9781467309745
Source code compilation is a non-trivial task that requires many computing resources. As a software project grows, its build time increases, and debugging on a single computer becomes an increasingly time-consuming task. An obvious solution is a dedicated cluster acting as a build farm to which developers can send their requests. In most cases, however, this solution leaves the available computing resources heavily underutilized, which makes it very inefficient. We have therefore focused on performing distributed compilation on non-dedicated clusters, where users' computers can serve as nodes of a build farm. We compare two different approaches: distcc, an open-source program that distributes compilation of C/C++ code among several computers on a network, and Clondike, a universal peer-to-peer cluster being developed at the Czech Technical University in Prague. A complex task that exercises both systems thoroughly is the compilation of the Linux kernel with many configuration options. We ran this task on a cluster of up to 20 computers and measured computation times and CPU loads. In this paper we present the results of this experiment, which indicate the scalability and resource utilization of both systems. We also discuss the penalty of a generic solution over a task-specific one.
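Neither scheduler's internals are described in the abstract; as a generic illustration of why distributing compilation pays off, the sketch below assigns each compile job to the currently least-loaded worker (a common list-scheduling heuristic, not distcc's or Clondike's actual policy) and contrasts the parallel makespan with the serial build time. The job costs are invented.

```python
def distribute(jobs, workers):
    """Assign each compile job (cost in seconds) to the currently
    least-loaded worker; return per-worker loads and the assignment."""
    loads = [0.0] * workers
    assignment = []
    for cost in jobs:
        w = min(range(workers), key=lambda i: loads[i])
        loads[w] += cost
        assignment.append(w)
    return loads, assignment

jobs = [4, 3, 3, 2, 2, 1]  # hypothetical per-file compile times
loads, _ = distribute(jobs, 3)
print(max(loads), sum(jobs))  # parallel makespan vs. serial build time
```

On a non-dedicated cluster the real loads are unknown and shifting, which is where a generic peer-to-peer system like Clondike pays its overhead relative to a task-specific tool like distcc.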
ISBN: (Print) 9781467309745
Cloud computing allows Web application owners to host their applications with low operational cost and enables them to scale applications to maintain performance under varying traffic loads and application resource requirements. However, for multi-tier Web applications it is difficult to automatically identify the exact location of a bottleneck and scale the appropriate resource tier accordingly, because multi-tier applications are complex and bottleneck patterns may depend on the specific workload pattern at any given time. This Ph.D. dissertation aims to explore the possibilities of satisfying response time guarantees for multi-tier applications hosted on clouds using adaptive resource management with minimal hardware profiling and application-centric knowledge.
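The dissertation's method is only outlined here; a naive baseline it would improve on can be sketched as follows. When the end-to-end response time violates the SLA, scale out whichever tier currently shows the highest utilization, a crude stand-in for automatic bottleneck identification. Tier names and thresholds are hypothetical.

```python
def scale_decision(response_ms, sla_ms, tier_util):
    """When the SLA is violated, pick the most utilized tier to scale
    out; return None when the response time is within the SLA."""
    if response_ms <= sla_ms:
        return None
    return max(tier_util, key=tier_util.get)

# A 450 ms response against a 300 ms SLA points at the app tier.
print(scale_decision(450, 300, {"web": 0.55, "app": 0.92, "db": 0.70}))
```

The weakness of this baseline is precisely what motivates the research: in multi-tier systems the saturated resource is not always the bottleneck (e.g. a database lock can throttle an idle-looking app tier), so utilization alone can scale the wrong tier.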