ISBN:
(Print) 9781424469864; 9788988678206
Maximum distribution reduction can obtain maximum credibility rules from an inconsistent information system. This paper analyzes the various situations that may be encountered when incremental objects are added to an inconsistent information system, and then provides methods for incrementally updating the discernibility formula. The analysis shows that, in most cases, the discernibility formula only needs an incremental update when a new object arrives. However, when an incremental object causes the values of the maximum credibility function of two equivalence classes to change from different to equal, the incremental computation cannot be performed, because the conjunction operation has no inverse. Finally, the complexity of the problem is analyzed.
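The paper's update procedure is not reproduced in this listing, but a minimal sketch of the additive part of the idea, assuming the discernibility formula is kept in conjunctive normal form as a set of clauses (each clause a frozenset of attribute names), could look like this; attributes_differing and add_object are hypothetical helpers:

```python
# Incremental update of a discernibility formula kept in CNF:
# each clause is the set of attributes that discern a pair of objects.
# A new object only ADDS clauses (conjunction with the old formula).
# Conjunction has no inverse, so clauses absorbed earlier cannot be
# "un-conjoined" if the maximum credibility function of two equivalence
# classes later changes from different to equal.

def attributes_differing(x, y, attributes):
    """Clause of attributes on which objects x and y differ."""
    return frozenset(a for a in attributes if x[a] != y[a])

def add_object(formula, universe, new_obj, attributes):
    """Conjoin the clauses induced by the incremental object (sketch)."""
    for obj in universe:
        clause = attributes_differing(obj, new_obj, attributes)
        if clause:                      # empty clause = indiscernible pair
            formula.add(clause)
    universe.append(new_obj)
    return formula

# Usage: objects as dicts of attribute -> value.
universe = [{"a": 1, "b": 0}, {"a": 1, "b": 1}]
formula = set()
add_object(formula, universe, {"a": 0, "b": 1}, attributes=("a", "b"))
print(formula)  # {frozenset({'a', 'b'}), frozenset({'a'})}
```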
ISBN:
(Print) 9781424469864; 9788988678206
Many academic approaches have been proposed to detect internet worms. In this paper, we provide a new approach based on the observation that internet worm behavior differs from normal patterns of internet activity. We consider all network packets before they reach the end-user, extracting a number of internet worm features from these packets. Our network features mainly consist of characteristics of the IP address, port, protocol, and some flags of the packet header. These features are used to detect and classify internet worm behavior using three different data mining algorithms: Bayesian Network, Decision Tree, and Random Forest. In addition, our approach can not only distinguish internet worms from normal data, but also classify network attacks whose behavior is similar to that of internet worms. Our approach provides good results, with a detection rate over 99.6 percent and a false alarm rate close to zero using the Random Forest algorithm. In addition, our model can classify the behavior of DoS and port scan attacks with a detection rate higher than 98 percent and a false alarm rate equal to zero.
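A minimal sketch of the classification step, assuming the packet-header features have already been extracted into a numeric matrix and using scikit-learn's Random Forest; the feature columns and toy labels are illustrative, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy rows of packet-header features (e.g. src port, dst port,
# protocol id, TCP flag bits); labels: 0 = normal, 1 = worm-like.
X = np.array([[80, 443, 6, 0b00010],
              [4444, 135, 6, 0b00010],
              [53, 53, 17, 0],
              [4444, 445, 6, 0b00010]], dtype=float)
y = np.array([0, 1, 0, 1])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.predict(X_te))  # predicted class per flow/packet record
```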
Identification and assignment of (potential) experts to a subject field is an important task in various settings and environments. In the scientific domain, the identification of experts is normally based on a number of factors such as the number of publications, citation record, and experience. However, the discovered experts cannot be assigned reviewing duties immediately. One also needs further information about an expert, such as country, university, service record, contributions, honors, and the names of conferences/journals where the discovered expert is already serving as editor/reviewer. To some extent, this information can be found through search engines using heuristics and by applying Natural Language Processing and machine learning techniques. However, the emergence of many semantically rich and structured datasets from the Linked Open Data (LOD) movement can facilitate a more controlled search and more fruitful results. This paper employs an automatic technique to find the required information about experts using LOD datasets. The expert profile is discovered, aggregated, clustered, structured, and visualized for the administration of a peer-review system. The system has been implemented for an electronic journal, the Journal of Universal Computer Science (***). The proposed system helps the *** administration find potential reviewers for scientific papers, assign reviewing duties, and recruit new editors for computer science topics.
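The paper's actual queries are not shown in this listing; a hedged sketch of retrieving background facts about a candidate expert from a LOD endpoint follows, with the DBpedia endpoint, the property choices, and the example resource all being assumptions:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Ask a public LOD endpoint for background facts about a candidate expert.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?affiliation ?country WHERE {
        dbr:Tim_Berners-Lee dbo:employer ?affiliation .
        OPTIONAL { ?affiliation dbo:country ?country . }
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["affiliation"]["value"],
          row.get("country", {}).get("value"))
```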
ISBN:
(Print) 9783642030697
Traditional kernelised classification methods sometimes cannot perform well because they use a single, fixed kernel, especially on some complicated data sets. In this paper, a novel optimal double-kernel combination (ODKC) method is proposed for complicated classification tasks. First, data sets are mapped by two basic kernels into different feature spaces, and then three kinds of optimal composite kernels are constructed by integrating information from the two feature spaces. Comparative experiments demonstrate the effectiveness of our methods.
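The exact ODKC composition rules are not given in this listing; one simple, valid composite is a convex combination of two basic kernels, sketched below with scikit-learn (the kernel choices, alpha, and toy dataset are assumptions):

```python
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def composite_kernel(X, Y, alpha=0.5):
    """Convex combination of two basic kernels (one possible composite)."""
    return alpha * rbf_kernel(X, Y, gamma=1.0) + \
           (1.0 - alpha) * polynomial_kernel(X, Y, degree=3)

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel=composite_kernel)  # SVC accepts a callable Gram function
clf.fit(X, y)
print(clf.score(X, y))
```

A convex combination of positive semi-definite kernels is itself positive semi-definite, which is why this kind of composite is admissible for an SVM.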
ISBN:
(Print) 9783642030697
This work presents an image analysis framework driven by emerging evidence and constrained by the semantics expressed in an ontology. Human perception, apart from visual stimulus and pattern recognition, also relies on general knowledge and application context for understanding visual content in conceptual terms. Our work is an attempt to imitate this behavior by devising an evidence-driven probabilistic inference framework using ontologies and Bayesian networks. Experiments conducted on two different image analysis tasks showed improved performance compared to the case where computer vision techniques act in isolation from any type of knowledge or context.
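A minimal pure-Python sketch of the evidence-driven idea: detector confidences act as likelihoods, an ontology constraint prunes inconsistent hypotheses, and Bayes' rule yields the posterior; all concept names, priors, and scores below are illustrative:

```python
# Hypotheses: candidate concepts for a region of an image.
priors = {"sky": 0.4, "sea": 0.4, "road": 0.2}

# Likelihoods P(visual evidence | concept) from low-level detectors.
likelihood = {"sky": 0.7, "sea": 0.6, "road": 0.5}

# Ontology/context constraint: e.g. the region lies above the horizon,
# and the ontology rules out "road" there.
consistent = {"sky": True, "sea": True, "road": False}

unnorm = {c: priors[c] * likelihood[c] * (1.0 if consistent[c] else 0.0)
          for c in priors}
z = sum(unnorm.values())
posterior = {c: p / z for c, p in unnorm.items()}
print(posterior)  # evidence and ontology together reshape the beliefs
```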
ISBN:
(Print) 9783642030697
No-regret algorithms for online convex optimization are potent online learning tools and have been demonstrated to be successful in a wide range of applications. Considering affine and external regret, we investigate what happens when a set of no-regret learners (voters) merge their respective decisions in each learning iteration into a single, common one in the form of a convex combination. We show that an agent (or algorithm) that executes this merged decision in each iteration of the online learning process, and each time feeds back a copy of its own reward function to the voters, incurs sublinear regret itself. As a by-product, we obtain a simple method that allows us to construct new no-regret algorithms out of known ones.
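A minimal sketch of the merging scheme, assuming each voter runs projected online gradient descent over the interval [-1, 1], the agent plays a fixed convex combination of the voters' decisions, and the same loss function is fed back to every voter; the quadratic losses and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voters, T = 3, 1000
x = np.zeros(n_voters)            # each voter's decision in [-1, 1]
w = np.ones(n_voters) / n_voters  # fixed convex combination weights

for t in range(1, T + 1):
    z = rng.uniform(-1, 1)        # target defining this round's loss
    decision = w @ x              # agent plays the merged decision
    loss = (decision - z) ** 2    # agent's loss this round
    # Feed a copy of the same loss function back to every voter,
    # each evaluating its gradient at its own point: d/dx (x - z)^2.
    grads = 2.0 * (x - z)
    eta = 1.0 / np.sqrt(t)        # standard OGD step size
    x = np.clip(x - eta * grads, -1.0, 1.0)  # projected gradient step
```

Since each voter individually enjoys sublinear regret under this feedback, the merged decision inherits the guarantee, which is the construction the abstract describes.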
ISBN:
(Print) 9783642030697
In this paper we present a comparative analysis of two types of remote sensing satellite data using wavelet-based data mining techniques. The results reveal anomalous variations related to the earthquakes. The methods studied in this work include wavelet transformations and spatial/temporal continuity analysis of wavelet maxima. These methods have been used to analyze the singularities of seismic anomalies in remote sensing satellite data, which are associated with the two earthquakes of Wenchuan and Pu'er that recently occurred in China.
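A hedged sketch of the wavelet-maxima step with PyWavelets, assuming a one-dimensional time series of satellite-derived values; the wavelet, scales, and synthetic anomaly are illustrative:

```python
import numpy as np
import pywt

# Synthetic daily series with an injected transient "anomaly".
t = np.arange(365, dtype=float)
signal = np.sin(2 * np.pi * t / 30.0)
signal[180:184] += 3.0            # abrupt variation to be localized

scales = np.arange(1, 33)
coef, _ = pywt.cwt(signal, scales, "morl")  # continuous wavelet transform

# Wavelet modulus maxima per scale: local peaks of |coefficients|.
mag = np.abs(coef)
for s in (4, 8, 16):
    row = mag[s - 1]
    peaks = [i for i in range(1, len(row) - 1)
             if row[i] > row[i - 1] and row[i] > row[i + 1]]
    best = max(peaks, key=row.__getitem__)
    print(f"scale {s}: strongest maximum near day {best}")
```

Maxima that persist across scales at the same time point are the singularities the temporal-continuity analysis looks for.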
ISBN:
(Print) 9783642030697
We face the problem of novelty detection from stream data, that is, the identification of new or unknown situations in an ordered sequence of objects which arrive on-line, at consecutive time points. We extend previous solutions by considering the case of objects modeled by multiple database relations. Frequent relational patterns are efficiently extracted at each time point, and a time window is used to filter out novelty patterns. An application of the proposed algorithm to the problem of detecting anomalies in network traffic is described, and quantitative and qualitative results obtained by analyzing a real data stream collected from firewall logs are reported.
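A much-simplified sketch of the windowing idea, using flat itemsets in place of the paper's relational patterns: patterns frequent at the current time point but never frequent within the trailing window are flagged as novel; the thresholds and toy connection records are assumptions:

```python
from collections import deque
from itertools import combinations

def frequent_patterns(transactions, min_sup=2, max_len=2):
    """Itemsets (up to max_len items) meeting the support threshold."""
    counts = {}
    for items in transactions:
        for k in range(1, max_len + 1):
            for pat in combinations(sorted(items), k):
                counts[pat] = counts.get(pat, 0) + 1
    return {p for p, c in counts.items() if c >= min_sup}

window = deque(maxlen=5)  # trailing window of past frequent-pattern sets

def process_time_point(transactions):
    current = frequent_patterns(transactions)
    seen = set().union(*window) if window else set()
    novelties = current - seen        # frequent now, never frequent before
    window.append(current)
    return novelties

# Each time point: a batch of "connections" described by categorical items.
print(process_time_point([{"tcp", "p80"}, {"tcp", "p80"}, {"udp", "p53"}]))
print(process_time_point([{"tcp", "p4444"}, {"tcp", "p4444"}]))
# Second call flags ('p4444',) and ('p4444', 'tcp') as novelty patterns.
```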
ISBN:
(Print) 9783642030697
Data mining is the process of extracting interesting information from large sets of data. Outliers are defined as events that occur very infrequently. Detecting outliers before they escalate with potentially catastrophic consequences is very important for various real-life applications, such as fraud detection, network robustness analysis, and intrusion detection. This paper presents a comprehensive analysis of three outlier detection methods: Extensible Markov Model (EMM), Local Outlier Factor (LOF), and LSC-Mine, covering both time complexity and outlier detection accuracy. Experiments conducted with the ozone level detection, IR video trajectory, and 1999 and 2000 DARPA DDoS datasets demonstrate that EMM outperforms both LOF and LSC-Mine in both time and outlier detection accuracy.
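A minimal sketch of the LOF baseline with scikit-learn; the toy points and neighbour count are illustrative (EMM and LSC-Mine have no off-the-shelf implementation used here):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Dense cluster plus one far-away point that should score as an outlier.
X = np.vstack([np.random.default_rng(0).normal(0, 0.3, size=(50, 2)),
               [[4.0, 4.0]]])

lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(X)             # -1 = outlier, 1 = inlier
scores = lof.negative_outlier_factor_   # lower = more anomalous
print(labels[-1], scores[-1])           # the injected point is flagged
```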