The Quiet DDoS attack has become one of the most severe threats to network security, because it consists entirely of legitimate TCP flows and distributes its destination IPs to evade the countermeasures deployed in the network. This high degree of destination-IP distribution is, however, a distinguishing characteristic of the attack, and we argue that it causes part of the attack traffic to deviate from the behavioral habits of network users. Inspired by this observation, we propose a novel method to counter the Quiet DDoS attack based on the NBHU (network behavior habit of users). We evaluate the method through simulations on the NS2 platform, and the results show that it can degrade the attack's performance.
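As a rough illustration of the underlying idea (not the paper's actual NBHU model), the sketch below flags sources whose destination-IP spread exceeds what a typical user exhibits; the flow representation and the threshold are hypothetical.

```python
from collections import defaultdict

def flag_suspect_sources(flows, max_dst_per_src=20):
    """Flag sources whose destination-IP spread exceeds typical user habit.

    Normal users contact a limited, recurring set of destinations, while
    Quiet DDoS bots spread legitimate-looking TCP flows over many
    destinations. `max_dst_per_src` is a hypothetical threshold.
    """
    dsts = defaultdict(set)
    for src, dst in flows:          # flows: iterable of (src_ip, dst_ip) pairs
        dsts[src].add(dst)
    return {src for src, d in dsts.items() if len(d) > max_dst_per_src}
```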
Strongly promoted by the leading industrial companies, cloud computing has become increasingly popular in recent years; its growth rate has surpassed even the most optimistic predictions. A cloud application is a large-scale distributed system that consists of many distributed cloud nodes. How to deploy cloud applications optimally is a challenging research problem. When deploying a cloud application to the cloud environment, cloud node ranking is one of the most important approaches for selecting optimal cloud nodes for the application. Traditional ranking methods usually rank cloud nodes by their QoS values without considering the communication performance between nodes. However, this kind of node relationship is very important for communication-intensive cloud applications (e.g., Message Passing Interface (MPI) programs), which communicate heavily between the selected cloud nodes. In this paper, we propose a novel clustering-based method for selecting optimal cloud nodes when deploying communication-intensive applications to the cloud environment. Our method takes into account not only the quality of each cloud node but also the communication performance between nodes. We deploy several well-known MPI programs on a real-world cloud and compare our method with other methods. The experimental results show the effectiveness of our clustering-based method.
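A minimal sketch of the general idea, assuming per-node QoS scores and a pairwise latency matrix as inputs; the greedy rule and scoring weights below are illustrative stand-ins, not the paper's clustering formulation.

```python
import numpy as np

def select_nodes(qos, latency, k):
    """Greedily pick k nodes that are both high-quality and well-connected.

    qos[i] is a scalar quality score for node i; latency[i][j] is the
    pairwise communication latency. Starting from the best-QoS node, we
    repeatedly add the node that maximizes QoS minus its average latency
    to the already-selected set.
    """
    n = len(qos)
    selected = [int(np.argmax(qos))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            score = qos[i] - np.mean([latency[i][j] for j in selected])
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```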
This paper proposes an intelligent broker approach to service composition and collaboration. The broker employs a planner to generate service composition plans according to service usage and workflow knowledge, dynamically searches for services according to the plan, and then invokes and coordinates the execution of the selected services at runtime. A prototype called I-Broker has been implemented to support the approach; it can be instantiated by populating its knowledge base with domain-specific knowledge to form domain-specific brokers. This paper also reports experiments that evaluate the scalability of the approach.
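The plan-discover-invoke loop might look as follows; `knowledge_base.plan`, `registry.find`, `step.bind`, and `service.invoke` are hypothetical interfaces standing in for I-Broker's planner, service discovery, and runtime coordination components.

```python
def run_broker(goal, knowledge_base, registry):
    """Sketch of an intelligent broker's runtime loop (hypothetical API)."""
    plan = knowledge_base.plan(goal)                 # generate a composition plan
    results = {}
    for step in plan:                                # execute steps in plan order
        service = registry.find(step.capability)     # dynamic service search
        if service is None:
            raise RuntimeError(f"no service found for {step.capability}")
        # Bind earlier results into this step's inputs, then invoke.
        results[step.name] = service.invoke(step.bind(results))
    return results
```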
Growing Internet business and computing footprints motivate server consolidation in data centers. Through virtualization technology, server consolidation can reduce the number of physical hosts and provide scalable services. However, ineffective memory usage across multiple virtual machines (VMs) becomes the bottleneck in server consolidation environments. Because of inaccurate memory usage estimates and the lack of memory resource management, services in data centers suffer considerable performance degradation even though they occupy large amounts of memory. To improve this situation, we first introduce the VM memory division view and the VM free memory division view. Based on them, we propose a hierarchical memory service mechanism, and we have designed and implemented the corresponding memory scheduling algorithm to enhance memory efficiency and meet service-level agreements. Benchmark results show that our implementation can save 30% of physical memory at the cost of 1% to 5% performance degradation. Built on the Xen virtualization platform and balloon driver technology, our work brings substantial benefits to a commercial cloud computing center that provides services for more than 2,000 VMs.
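To make the balloon-driver mechanics concrete, here is a minimal rebalancing sketch; the `used_mb`/`alloc_mb` attributes, the per-VM reserve, and the proportional redistribution rule are hypothetical, not the paper's scheduling algorithm.

```python
def rebalance_memory(vms, reserve_mb=128):
    """Compute new per-VM memory targets for a balloon-driver pass.

    Free memory beyond a per-VM reserve is reclaimed (balloon inflate)
    and shared out to VMs whose working set approaches their allocation
    (balloon deflate), which is the general effect a hierarchical memory
    service aims for.
    """
    surplus = {v: v.alloc_mb - v.used_mb - reserve_mb for v in vms}
    pool = sum(s for s in surplus.values() if s > 0)       # reclaimable memory
    total_need = sum(-s for s in surplus.values() if s < 0) or 1
    targets = {}
    for v in vms:
        if surplus[v] > 0:
            targets[v] = v.alloc_mb - surplus[v]            # inflate balloon
        elif surplus[v] < 0:
            grant = pool * (-surplus[v]) / total_need       # share the pool
            targets[v] = v.alloc_mb + grant                 # deflate balloon
        else:
            targets[v] = v.alloc_mb
    return targets  # new memory targets to hand to the hypervisor
```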
Detecting performance problems online in large-scale cloud computing systems is a major headache for developers. The behavior of, and the hidden connections among, the huge number of runtime request execution paths in such systems usually contain useful information for performance problem detection. In this paper, we propose an approach to rapidly diagnose the source of performance degradation in large-scale non-stop cloud computing systems. The approach first groups user requests into categories with a fast clustering algorithm, then applies principal component analysis to extract the primary methods, and finally compares the normal and abnormal behaviors of the primary methods to localize the main cause of performance problems. We conduct extensive experiments on a real-world enterprise system that provides services to the public. The results show that our approach can locate the prime causes of performance problems accurately and efficiently.
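A minimal sketch of the cluster-then-PCA pipeline, assuming `normal` and `abnormal` are (requests x methods) latency matrices; the specific models and the top-k cutoff are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import PCA

def locate_suspect_methods(normal, abnormal, method_names,
                           n_clusters=8, n_components=3, top_k=10):
    # 1. Group requests into categories with a fast clustering algorithm.
    labels = MiniBatchKMeans(n_clusters=n_clusters, n_init=3).fit_predict(normal)
    group = normal[labels == np.bincount(labels).argmax()]  # largest category
    # 2. Apply PCA; treat high-loading methods as the "primary" methods.
    pca = PCA(n_components=n_components).fit(group)
    primary = np.argsort(np.abs(pca.components_).sum(axis=0))[-top_k:]
    # 3. Compare normal vs. abnormal behavior of the primary methods.
    shift = abnormal[:, primary].mean(axis=0) - group[:, primary].mean(axis=0)
    order = np.argsort(shift)[::-1]            # largest latency increase first
    return [(method_names[primary[i]], float(shift[i])) for i in order]
```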
Extracting fault features from the error logs of fault injection tests has been widely studied in the area of large-scale distributed systems for decades. However, the extraction process is severely affected by large volumes of noisy logs. Existing work tries to solve the problem by compressing logs in the temporal and spatial views or by removing semantic redundancy between logs, but it fails to consider the co-existence of other noisy faults, besides the injected faults, that also generate error logs: for example, random hardware faults, unexpected software bugs, system configuration faults, or an erroneously ranked log severity. During fault feature extraction, these noisy faults generate error logs that are unrelated to the target fault and strongly mislead the resulting fault features. We call an error log that is not related to a target fault a noisy error log. To filter out noisy error logs, we present a similarity-based error log filtering method, SBF, which consists of three integrated steps: (1) model the error logs as time series and apply the Haar wavelet transform to obtain approximate time series; (2) divide the approximate time series into sub-time-series at its valleys; (3) identify noisy error logs by comparing the similarity between the sub-time-series of the target error logs and the template of noisy error logs. We apply our log filtering method in an enterprise cloud system and show its effectiveness. Compared with existing work, we successfully filter out noisy error logs and improve both the precision and the recall of fault feature extraction.
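The three steps can be sketched as follows, assuming each log stream has already been turned into a counts-per-interval time series; the smoothing depth and the similarity threshold are hypothetical parameters.

```python
import numpy as np

def haar_approx(series, levels=2):
    """Step 1: Haar approximation coefficients (averaging adjacent pairs)."""
    s = np.asarray(series, dtype=float)
    for _ in range(levels):
        if len(s) % 2:                        # pad to even length
            s = np.append(s, s[-1])
        s = (s[0::2] + s[1::2]) / 2.0
    return s

def split_by_valleys(s):
    """Step 2: cut the smoothed series at local minima (valleys)."""
    valleys = [i for i in range(1, len(s) - 1)
               if s[i] <= s[i - 1] and s[i] <= s[i + 1]]
    bounds = [0] + valleys + [len(s)]
    return [s[a:b] for a, b in zip(bounds, bounds[1:]) if b > a]

def is_noisy(segment, template, threshold=0.9):
    """Step 3: flag a segment that resembles the noisy-log template."""
    n = min(len(segment), len(template))
    a, b = segment[:n], template[:n]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float(a @ b) / denom >= threshold  # cosine similarity
```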
When a large-scale distributed interactive simulation system runs on a WAN, the sites are usually dispersed over a wide geographic area, which makes it hard to accurately synchronize the simulation clock of each site with those of the other sites. The asynchronous clocks and large transmission latency on a WAN make it difficult for large-scale simulations to preserve real-time causal-order delivery of received events at each site. In this article, we first analyze an indirect way to compare the values of asynchronous simulation clocks, and then propose a novel scheme that selects reconstructible causal control information for each message so as to ensure the causal ordering of events in real time. Experiments demonstrate that the scheme can weaken the effect of network latency, reduce the transmission overhead of control information, and improve causal-order consistency in asynchronous distributed simulations.
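For reference, the causal-delivery condition the scheme must preserve can be illustrated with full vector clocks; the paper's contribution is selecting a reduced, reconstructible control set instead, so this is only the baseline condition, not the proposed scheme.

```python
class CausalReceiver:
    """Vector-clock causal delivery at one site (baseline illustration)."""

    def __init__(self, site_id, n_sites):
        self.id, self.clock = site_id, [0] * n_sites
        self.pending = []

    def _deliverable(self, sender, vc):
        # Next message from sender, and all its other dependencies seen.
        return (vc[sender] == self.clock[sender] + 1 and
                all(vc[k] <= self.clock[k]
                    for k in range(len(vc)) if k != sender))

    def receive(self, sender, vc, event):
        self.pending.append((sender, vc, event))
        delivered, progress = [], True
        while progress:                       # flush everything now deliverable
            progress = False
            for msg in list(self.pending):
                if self._deliverable(msg[0], msg[1]):
                    self.clock[msg[0]] += 1
                    delivered.append(msg[2])
                    self.pending.remove(msg)
                    progress = True
        return delivered
```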
Due to the large message transmission latency in Distributed Virtual Environments (DVEs) on a Wide Area Network (WAN), the effectiveness of causality-consistency control of message ordering is determined not only by the causal order of messages but also by their timeliness. If only causal order is considered, the real-time property of DVEs may not be ensured because of unbounded waiting for delayed messages; if only timeliness is emphasized, too many delayed messages may have to be discarded to maintain the quality of causal message ordering. A trade-off between the quality of causal-order delivery and timeliness is therefore necessary for DVEs. In this article, a novel causality-based message ordering approach is presented that dynamically balances the demands of causal-order delivery and timeliness. Experimental results demonstrate that the approach can enhance the quality of causal ordering while simultaneously preserving the real-time property of DVEs.
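The trade-off can be sketched as a per-message delivery decision; `msg.vc`, `msg.sender`, `msg.recv_time`, and the fixed `wait_budget` are hypothetical (the paper adjusts the balance dynamically rather than using a static budget).

```python
import time

def try_deliver(msg, clock, now=None, wait_budget=0.1):
    """Decide whether to deliver, late-deliver, or hold a message.

    A message is delivered when its causal predecessors have arrived,
    delivered out of order once its waiting budget expires, and never
    blocked forever, which captures the causal-order vs. timeliness
    trade-off in its simplest static form.
    """
    now = time.monotonic() if now is None else now
    causally_ready = (msg.vc[msg.sender] == clock[msg.sender] + 1 and
                      all(msg.vc[k] <= clock[k]
                          for k in range(len(clock)) if k != msg.sender))
    if causally_ready:
        return "deliver"                      # causal order preserved
    if now - msg.recv_time > wait_budget:
        return "deliver_late"                 # timeliness wins; order violated
    return "hold"                             # keep waiting for predecessors
```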
In order to resolve the problem of skew introduced into handwritten document images during scanning, a new skew angle detection algorithm based on the maximum gradient difference and the Hough transform was proposed.
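As a rough illustration of Hough-style skew estimation (the maximum gradient difference step that isolates text regions is omitted; `binary_img` is assumed to already mark text pixels, and the angle range and step are hypothetical):

```python
import numpy as np

def estimate_skew_angle(binary_img, angle_range=10.0, angle_step=0.1):
    """Estimate document skew by Hough-style voting over candidate angles.

    For each candidate angle, text pixels are projected onto the axis
    normal to lines at that angle; the correct skew angle concentrates
    votes into sharp peaks (one per text line), so we score each angle
    by the peakedness of its projection profile.
    """
    ys, xs = np.nonzero(binary_img)
    angles = np.deg2rad(np.arange(-angle_range, angle_range + angle_step,
                                  angle_step))
    best_angle, best_score = 0.0, -1.0
    for theta in angles:
        rho = (xs * np.sin(theta) + ys * np.cos(theta)).astype(int)
        hist = np.bincount(rho - rho.min()).astype(float)
        score = float((hist ** 2).sum())      # peakedness of the profile
        if score > best_score:
            best_score, best_angle = score, float(np.rad2deg(theta))
    return best_angle
```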
Mining repeated patterns from HTML documents is the key step towards Web-based data mining and knowledge extraction. Many web crawling applications need efficient repeated-pattern mining techniques to generate their wrappers automatically. Existing approaches such as tree matching and string matching can detect repeated patterns with high precision, but their performance remains a challenge for practical web crawling applications. In this paper, we propose an efficient approach for mining repeated patterns based on the indent shape of an HTML document. The indent shape is a novel and simple model of an HTML document in which tandem repeated waves are strongly associated with the repeated patterns to be detected. By scanning an indent shape with a horizontal indent line from bottom to top, the tandem repeated waves are identified by filtering out wave segments with low self-similarity. The boundaries of the HTML code corresponding to the repeated patterns can then be identified and easily transformed into formally defined regular expressions. Extensive experiments on two practical data sets retrieved from the Internet show that our approach is significantly more efficient than existing approaches, and its precision is also generally better.
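A minimal sketch of the indent-shape idea, assuming tag-nesting depth is a usable proxy for the paper's indent value and using back-to-back depth-pattern repetition in place of its self-similarity filtering and bottom-to-top scan; real HTML would need a tolerant parser.

```python
import re

VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

def indent_shape(html):
    """Build the indent shape: nesting depth at each tag in document order."""
    depth, shape = 0, []
    for closing, name, selfclosing in re.findall(r"<(/?)(\w+)[^>]*?(/?)>", html):
        if closing:
            depth -= 1
        shape.append(depth)
        if not closing and not selfclosing and name.lower() not in VOID_TAGS:
            depth += 1
    return shape

def tandem_waves(shape, min_repeats=3):
    """Find runs where the same depth pattern repeats back-to-back; these
    correspond to the tandem repeated waves of the indent shape."""
    waves = []
    for period in range(2, len(shape) // min_repeats + 1):
        i = 0
        while i + period * min_repeats <= len(shape):
            block = shape[i:i + period]
            reps = 1
            while shape[i + reps * period:i + (reps + 1) * period] == block:
                reps += 1
            if reps >= min_repeats:
                waves.append((i, period, reps))   # (start, period, repeats)
                i += reps * period
            else:
                i += 1
    return waves
```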