With the rapid development of the Internet, network structures have become larger and more complex, and attack methods have grown more sophisticated. To enhance network security, Network Security Situation Analysis...
Virtualization provides significant benefits for system maintenance, load balancing, fault tolerance, and power saving in clusters and data centers. It also enables dynamic reconfiguration of computing resources for application environments. However, current dynamic resource control approaches mainly focus on how to satisfy application-level quality of service (QoS) when application workloads vary over time. They are driven by application performance, which is often specific to a certain class of applications and also weakens the response capability of the control system. In this paper, a resource-use-status-driven resource reconfiguration scheme (RUSiC) is presented to automatically adapt to dynamic workload changes and meet the demands of application performance. In line with the new characteristics introduced by system virtualization, the scheme is designed as a two-layer resource reconfiguration model that accurately captures application demands. Based on the real resource use status, the scheme adjusts the resource configuration for applications in a timely manner. Furthermore, the scheme incorporates power saving into the resource reconfiguration process and avoids considerable unnecessary power and cooling consumption by reducing the number of active physical nodes in a new resource configuration. Experiments demonstrate that the scheme can quickly detect and respond to shifting resource demands as application workloads change over time.
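The abstract itself gives no code; the following is a minimal sketch of how a resource-use-status-driven reconfiguration loop of this kind might look, with hypothetical names and thresholds (reconfigure, the low/high utilization bounds, an assumed host capacity of 8 vCPUs), not the authors' actual RUSiC implementation. Each VM's observed CPU utilization drives a grow/shrink decision, and a first-fit packing step consolidates VMs onto as few physical hosts as possible so that idle nodes can be powered down.

# Hypothetical use-status-driven reconfiguration loop (illustrative sketch only).
def reconfigure(vms, low=0.3, high=0.8, step=1):
    """vms: list of dicts like {'name': 'vm1', 'cpu_util': 0.9, 'vcpus': 2}."""
    for vm in vms:
        if vm['cpu_util'] > high:                        # under-provisioned: add capacity
            vm['vcpus'] += step
        elif vm['cpu_util'] < low and vm['vcpus'] > 1:   # over-provisioned: reclaim capacity
            vm['vcpus'] -= step
    # Power-aware placement: pack VMs onto as few hosts as possible (first fit),
    # so unused physical nodes can be switched off.
    hosts, capacity = [], 8                              # assumed 8 vCPUs per physical host
    for vm in sorted(vms, key=lambda v: v['vcpus'], reverse=True):
        for host in hosts:
            if host['used'] + vm['vcpus'] <= capacity:
                host['used'] += vm['vcpus']
                host['vms'].append(vm['name'])
                break
        else:
            hosts.append({'used': vm['vcpus'], 'vms': [vm['name']]})
    return vms, hosts                                    # active physical nodes = len(hosts)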
The policies and mechanisms of VCPU (virtual CPU) scheduling in a virtual machine system are key factors that determine system performance. Because the architecture of the software stack in a virtual machine system differs from that of traditional computer systems, simply adopting the scheduling strategies and algorithms of existing operating systems without modification when scheduling VCPUs can lead to drastic degradation of system performance. Moreover, with multi-core technology being employed in physical processors, the complexity of VCPU scheduling increases. This paper first depicts and analyzes the architecture of the virtual machine system and its two-stage scheduling framework in detail. Because a deterministic mapping between application threads and physical cores is difficult to establish in the two-stage framework, and some operating system functions move down to the virtual machine monitor, VCPU scheduling faces many problems and challenges, mainly in four aspects: the semantic gap between guest operating systems and the virtual machine monitor, the synchronization mechanisms in a multiprocessor operating system, the structure of shared caches in multi-core processors, and the emerging asymmetric multi-core structure. The advantages and limitations of existing solutions to these problems are then discussed and analyzed in depth, and suggestions for further research are presented.
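For illustration only, the sketch below models the two-stage scheduling framework described in this abstract: a guest OS maps threads onto its VM's VCPUs, and the virtual machine monitor maps runnable VCPUs onto physical cores. All names (guest_schedule, vmm_schedule, the round-robin policy) are hypothetical; the point is simply that the composed thread-to-core mapping changes whenever either stage reschedules, which is the source of the non-deterministic relationship between application threads and physical cores.

# Hypothetical two-stage scheduling sketch (not the paper's framework code).
from itertools import cycle

def guest_schedule(threads, vcpus):
    """Stage 1: a guest OS round-robins its threads onto the VM's VCPUs."""
    return {t: v for t, v in zip(threads, cycle(vcpus))}

def vmm_schedule(vcpus, cores):
    """Stage 2: the VMM round-robins runnable VCPUs onto physical cores."""
    return {v: c for v, c in zip(vcpus, cycle(cores))}

vm1 = guest_schedule(['t0', 't1', 't2'], ['vm1.vcpu0', 'vm1.vcpu1'])
vm2 = guest_schedule(['u0', 'u1'], ['vm2.vcpu0'])
vmm = vmm_schedule(['vm1.vcpu0', 'vm1.vcpu1', 'vm2.vcpu0'], ['core0', 'core1'])

# The effective thread-to-core mapping is the composition of both stages and
# changes whenever either the guest OS or the VMM reschedules.
for thread, vcpu in {**vm1, **vm2}.items():
    print(thread, '->', vcpu, '->', vmm[vcpu])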
Spike sorting is an essential step in analyzing recorded spike signals for studying information processing mechanisms within the nervous system. Overlapping is one of the most serious problems in the spike sorting f...
MapReduce provides a novel computing model for complex job decomposition and sub-task management to support cloud computing with large distributed data sets. However, its performance is significantly influenced by th...
Cloud computing enables users to perform their computation tasks in the public virtualized cloud in a pay-as-you-go style. Current pay-as-you-go pricing schemes typically charge for the incurred virtual machine hours. Our case studies demonstrate significant variations in user costs, indicating considerable unfairness among different users from a micro-economic perspective. Further studies reveal that the reason for such variations is interference among concurrent virtual machines. The amount of interference cost depends on various factors, including workload characteristics, the number of concurrent VMs, and scheduling in the cloud. In this paper, we adopt the concept of pricing fairness from micro-economics and quantitatively analyze the impact of interference on pricing fairness. To address the unfairness caused by interference, we propose a pay-as-you-consume pricing scheme, which charges users according to their effective resource consumption, excluding interference. The key idea behind the pay-as-you-consume pricing scheme is a machine-learning-based prediction model of the relative cost of interference. Our preliminary results with Xen demonstrate the accuracy of the prediction model and the fairness of the pay-as-you-consume pricing scheme.
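As a rough illustration of the pricing idea, the sketch below contrasts pay-as-you-go billing with a pay-as-you-consume charge that excludes predicted interference time. The predictor is stubbed out with a made-up heuristic (predict_interference and its feature names are assumptions, not the paper's trained model); the structure only shows how a learned interference estimate could feed into an effective charge.

# Hypothetical pricing sketch; the interference predictor is a placeholder
# for the machine-learning model mentioned in the abstract.
def predict_interference(features):
    """Return an assumed interference fraction in [0, 1); real features might
    include cache misses, I/O wait, and the number of co-located VMs."""
    return min(0.9, 0.05 * features.get('co_runners', 0))

def pay_as_you_go(hours, rate):
    return hours * rate                                  # charge for wall-clock VM hours

def pay_as_you_consume(hours, rate, features):
    interference = predict_interference(features)
    effective_hours = hours * (1.0 - interference)       # exclude interference time
    return effective_hours * rate

# Example: 10 VM hours at $0.10/hour with 4 co-located VMs.
print(pay_as_you_go(10, 0.10))                           # 1.0
print(pay_as_you_consume(10, 0.10, {'co_runners': 4}))   # 0.8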
In data centers and cloud computing environments, the number of virtual machines (VMs) grows as the number of service requests increases. Since services are invoked on demand, the corresponding virtual machines are created and shut down frequently. This makes the time to start a virtual machine a crucial performance bottleneck for services in data centers. Moreover, if virtual machines read image files from disk to boot, additional disk access overhead is incurred. In this paper, we present VCache, a cloud service cache system based on memory templates of virtual machines, to improve the response time of cloud computing services and to reduce disk access overhead. The system creates and maintains service cache VMs through memory templates, which are snapshots of running virtual machines. By creating virtual machines from these cached templates, the services running in these VMs can be deployed rapidly, which greatly reduces service launch time and disk I/O load. We evaluate our system with experiments, and the results show that the average time for creating a VM is reduced by about 80% and the amount of data read through disk access decreases by more than 50%.
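A minimal sketch of the memory-template idea follows, assuming hypothetical hypervisor helpers (_boot_from_image, _snapshot, _clone) rather than VCache's actual implementation: the first request for a service boots a VM from its disk image and caches a snapshot of the running VM, and later requests restore new instances from that snapshot instead of booting from disk.

# Hypothetical memory-template cache sketch (not the authors' VCache code).
class MemoryTemplateCache:
    def __init__(self):
        self.templates = {}                              # service name -> snapshot handle

    def get_vm(self, service):
        if service in self.templates:
            return self._clone(self.templates[service])  # fast path: restore, no disk boot
        vm = self._boot_from_image(service)               # slow path: full boot from disk image
        self.templates[service] = self._snapshot(vm)      # cache a snapshot of the running VM
        return vm

    # The helpers below stand in for hypervisor snapshot/restore operations
    # (e.g. in Xen or KVM); they are placeholders in this sketch.
    def _boot_from_image(self, service):
        return f'{service}-vm-from-disk'

    def _snapshot(self, vm):
        return f'snapshot-of-{vm}'

    def _clone(self, snapshot):
        return f'vm-restored-from-{snapshot}'

cache = MemoryTemplateCache()
print(cache.get_vm('web'))   # first request boots from the disk image
print(cache.get_vm('web'))   # subsequent requests restore from the memory template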
Cloud computing and Software-as-a-Service (SaaS) are being widely adopted. In the near future, there will be many cloud services providing similar functionality. One of the issues is how to dispatch service requests to availab...
Rendering volume caustics in participating media is often expensive, even with various acceleration approaches. Basic volume photon tracing can be used to render such effects, but it is rather slow due to its massive quant...
Word Sense Disambiguation (WSD) is one of the fundamental natural language processing tasks. However, the lack of training corpora is a bottleneck to constructing a highly accurate all-words WSD system. Annotating a large-scal...