Historically, the empirical risk of a pattern classifier was required to be driven to zero, so training samples were by default restricted to separable ones. Today, by contrast, mainstream approaches to learning classifiers no longer demand that the empirical risk be exactly zero. In this setting, an inseparable feature set is not necessarily detrimental to classifier performance. However, no experimental studies or analytical results have so far shown whether an inseparable feature set is usable. This paper first analyzes the interaction between learning algorithms and feature selection, and then supports the argument with both analytical results and experimental studies.
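As a self-contained illustration of this point (not the paper's own experiment), the following sketch, assuming two overlapping one-dimensional Gaussian classes, shows that a threshold classifier whose empirical risk cannot be driven to zero can still generalize well on an inseparable sample:

# Hypothetical illustration: an inseparable 1-D two-class problem where zero
# empirical risk is impossible, yet the learned threshold still generalizes well.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Class 0 ~ N(-1, 1), class 1 ~ N(+1, 1): heavily overlapping, hence inseparable.
    x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(+1.0, 1.0, n)])
    y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    return x, y

x_tr, y_tr = sample(500)
x_te, y_te = sample(5000)

# Pick the threshold that minimizes (but cannot zero out) the training error.
candidates = np.sort(x_tr)
errors = [np.mean((x_tr > t).astype(int) != y_tr) for t in candidates]
t_best = candidates[int(np.argmin(errors))]

train_err = min(errors)
test_err = np.mean((x_te > t_best).astype(int) != y_te)
print(f"threshold={t_best:.3f}  train error={train_err:.3f}  test error={test_err:.3f}")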
Parzen window estimation is one of the classical non-parametric methods in machine learning and pattern classification, and it usually uses the Gaussian density function as its kernel. Although the relation between kernel density estimation (KDE) and low-pass filtering is well known, it is very difficult to set the parameters of other kinds of density functions. This paper proposes a novel method for setting the parameter of the Laplace kernel by measuring the degree of information exchanged among interpolating points. Experimental results show that the proposed method can improve the performance of Parzen windows significantly.
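To make the setting concrete, here is a minimal Parzen window sketch with both a Gaussian and a Laplace kernel; the bandwidth values are placeholders, and the information-exchange criterion the paper proposes for tuning the Laplace parameter is not reproduced.

# Minimal Parzen window (kernel density) estimate with Gaussian and Laplace kernels.
# The bandwidth h is the parameter at issue in the paper; here it is fixed by hand.
import numpy as np

def parzen_gaussian(x, samples, h):
    u = (x[:, None] - samples[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi), axis=1) / h

def parzen_laplace(x, samples, h):
    u = (x[:, None] - samples[None, :]) / h
    return np.mean(0.5 * np.exp(-np.abs(u)), axis=1) / h

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 200)   # training points
grid = np.linspace(-4, 4, 9)          # evaluation points

p_gauss = parzen_gaussian(grid, samples, h=0.5)
p_lap = parzen_laplace(grid, samples, h=0.5)
for x, pg, pl in zip(grid, p_gauss, p_lap):
    print(f"x={x:+.1f}  gaussian={pg:.3f}  laplace={pl:.3f}")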
ISBN (print): 9781595937025
Existing video research incorporates the use of relevance feedback based on user-dependent interpretations to improve retrieval results. In this paper, we segregate the process of relevance feedback into two distinct facets: (a) recall-directed feedback and (b) precision-directed feedback. The recall-directed facet employs general features such as text and high-level features (HLFs) to maximize efficiency and recall during feedback, making it very suitable for large corpora. The precision-directed facet, on the other hand, uses many other multimodal features in an active learning environment for improved accuracy. Combined with a performance-based adaptive sampling strategy, this process continuously re-ranks a subset of instances as the user annotates. Experiments on the TRECVID 2006 dataset show that our approach is efficient and effective.
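The generic shape of such a feedback loop (not the paper's exact system; a plain nearest-mean scorer stands in for its multimodal active learner, and all names are illustrative) might look like the sketch below: the user labels a few sampled items, the relevance model is refit, and the ranked list is re-scored after each annotation round.

# Hypothetical relevance-feedback loop: an initial text-based ranking is
# iteratively re-ranked using labels supplied by a (simulated) user.
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 16
features = rng.normal(size=(n, d))               # stand-in multimodal features
relevant = rng.random(n) < 0.1                   # hidden ground truth
initial_score = rng.random(n) + 0.5 * relevant   # stand-in text/HLF score

ranking = np.argsort(-initial_score)
labels = {}                                      # index -> 0/1 from the "user"

for rnd in range(5):
    # Adaptive sampling: annotate the top unlabeled items of the current ranking.
    batch = [i for i in ranking if i not in labels][:10]
    for i in batch:
        labels[i] = int(relevant[i])             # simulated user annotation
    pos = [i for i, y in labels.items() if y == 1]
    if pos:
        centroid = features[pos].mean(axis=0)
        sim = features @ centroid                # nearest-mean relevance score
        score = initial_score + 0.5 * sim        # combine with the initial score
        ranking = np.argsort(-score)
    prec = np.mean(relevant[ranking[:100]])
    print(f"round {rnd}: precision@100 = {prec:.2f}")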
In this paper, a novel direct clustering algorithm based on generalized information distance (GID) is put forward. First, based on information theory, the basic concept of a measure of diversity is given and an inequality concerning it is proved; building on this inequality, the concept of increment of diversity is defined and discussed. Second, by analyzing distance measures, two new concepts, generalized information distance (GID) and improved generalized information distance (IGID), are proposed, and a new direct clustering algorithm based on GID and IGID is designed. Finally, the algorithm is applied to soil fertility data processing and compared with a hierarchical clustering algorithm (HCA). Simulation results show that the proposed algorithm is feasible and effective; owing to its simplicity and robustness, it provides a new research approach for pattern recognition theory.
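As background for the information-theoretic quantities involved (the paper's GID/IGID definitions themselves are not reproduced here), a minimal sketch of the standard measure of diversity and its increment, used to assign a count profile to the closer of two groups, could look like this:

# Measure of diversity D and increment of diversity ID, the information-theoretic
# building blocks behind distance measures of this kind (illustrative only).
import numpy as np

def diversity(counts):
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    nz = counts[counts > 0]
    return n * np.log(n) - np.sum(nz * np.log(nz))

def increment_of_diversity(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return diversity(x + y) - diversity(x) - diversity(y)

# Two "cluster profiles" (e.g. attribute counts) and a new sample profile.
cluster_a = [40, 10, 5]
cluster_b = [5, 12, 38]
sample = [3, 1, 0]

ida = increment_of_diversity(sample, cluster_a)
idb = increment_of_diversity(sample, cluster_b)
print(f"ID(sample, A)={ida:.3f}  ID(sample, B)={idb:.3f}")
print("assign to", "A" if ida < idb else "B")   # smaller increment = more similar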
To address the high computational cost of fitness assignment in multi-objective evolutionary algorithms, this paper proposes a novel fitness-assignment structure, the dominating tree. The dominating tree preserves the necessary dominance relationships among individuals, implicitly contains density information, and markedly reduces the number of comparisons among individuals. In addition, a smart eliminating strategy based on the dominating tree maintains the diversity of the population without extra cost. A new multi-objective evolutionary algorithm built on these innovations is proposed. Evaluated with three performance metrics on six test problems, the new algorithm is competitive with SPEA2 and NSGA-II in converging to the true Pareto front and maintaining population diversity; moreover, it is much faster than the other two algorithms.
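The dominance relation that such a tree organizes can be made concrete with a short sketch (the dominating tree itself and its eliminating strategy are the paper's contribution and are not reproduced here):

# Pareto dominance for minimization problems: the relation a dominating tree
# stores compactly so that fewer pairwise comparisons are needed.
import numpy as np

def dominates(a, b):
    # True if objective vector a dominates b (all objectives <=, at least one <).
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(population):
    # Naive O(n^2) filter returning the non-dominated objective vectors.
    front = []
    for i, p in enumerate(population):
        if not any(dominates(q, p) for j, q in enumerate(population) if j != i):
            front.append(p)
    return front

pop = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (5.0, 1.0), (4.0, 4.0)]
print(nondominated(pop))   # (3,4) and (4,4) are dominated by (2,2)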
ISBN (print): 9780769530482; 0769530486
The traditional RBAC model describes a static access control policy. With the development of network applications such as Web services, access control faces many new challenges, one of which is that access control policies need to protect not only static resources but also dynamic ones encapsulated in a service. To capture this flexibility, we specify fine-grained control over individual users by introducing user attributes that are associated with a user's role and permissions. We treat a service as an action that changes some of a user's attributes, thereby adjusting the user's permissions at runtime. To represent and reason about access control automatically, we use description logics combined with propositional dynamic logic as a logical framework to construct a knowledge base for the access control and action rules, and we semantically explain how a user's permissions can change at runtime.
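A plain-Python sketch of the attribute-driven idea (leaving out the description-logic/PDL formalization, and using made-up role, attribute, and service names) might look like this:

# Illustrative only: a service is modelled as an action that updates user
# attributes, and permission checks consult both the role and those attributes.
ROLE_PERMISSIONS = {
    "customer": {"browse", "order"},
    "premium_customer": {"browse", "order", "bulk_order"},
}

# Hypothetical policy rule: permission "bulk_order" also requires a deposit.
ATTRIBUTE_CONDITIONS = {
    "bulk_order": lambda attrs: attrs.get("deposit", 0) >= 1000,
}

def has_permission(user, permission):
    if permission not in ROLE_PERMISSIONS.get(user["role"], set()):
        return False
    condition = ATTRIBUTE_CONDITIONS.get(permission)
    return condition(user["attributes"]) if condition else True

def pay_deposit_service(user, amount):
    # The service acts on the user's attributes, changing permissions at runtime.
    user["attributes"]["deposit"] = user["attributes"].get("deposit", 0) + amount
    if user["attributes"]["deposit"] >= 1000:
        user["role"] = "premium_customer"

user = {"role": "customer", "attributes": {}}
print(has_permission(user, "bulk_order"))   # False before the service runs
pay_deposit_service(user, 1500)
print(has_permission(user, "bulk_order"))   # True afterwards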
Automatic keyword extraction is a focus of research in information retrieval, data mining, chance discovery, and other applications. In this paper, a new algorithm, CCG (Cognition & Concept Graph), for text chance discovery is presented, based on cognition with data depth as the measurement. When the keywords in a document are treated as chances in the document, those keywords can be extracted by CCG automatically. In CCG, the concepts of a document are represented as the maximal connected subgraphs of the document's basic graph, and the reader's/author's cognition of a term is weighted by data depth. The correlation between a word and a concept is defined, and the formula for computing this correlation is given. Experimental results show that keywords extracted by CCG describe the document and the author's/reader's cognition much better than keywords extracted by other techniques such as frequency accumulation or KeyGraph.
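A stripped-down sketch of the graph-based part of such an approach (a co-occurrence graph, connected components as "concepts", and a simple word-concept correlation score; the paper's data-depth weighting is not reproduced) could be:

# Build a sentence-level co-occurrence graph, treat connected components as
# "concepts", and score words by how strongly they connect to a concept.
from collections import defaultdict
from itertools import combinations

sentences = [
    ["chance", "discovery", "keyword"],
    ["keyword", "extraction", "graph"],
    ["soil", "sample", "analysis"],
]

# Co-occurrence edges within a sentence, keyed by sorted word pair.
edges = defaultdict(int)
for sent in sentences:
    for a, b in combinations(sorted(set(sent)), 2):
        edges[(a, b)] += 1

# Adjacency and connected components (each component ~ one "concept").
adj = defaultdict(set)
for (a, b) in edges:
    adj[a].add(b)
    adj[b].add(a)

def components(nodes):
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

for concept in components(list(adj)):
    # Simple word-concept correlation: weighted degree inside the component.
    score = {w: sum(edges.get(tuple(sorted((w, u))), 0) for u in concept) for w in concept}
    print(sorted(score.items(), key=lambda kv: -kv[1]))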
Because a large number of legacy information systems still exist, how to extract their information and integrate them is drawing increasing attention. This paper introduces an integration pattern based on an agent grid and proposes an agent grid intelligent platform called AGrIP, which can eliminate information islands and integrate external systems efficiently by encapsulating distributed application systems as agents. AGrIP adopts a distributed hierarchical structure capable of integrating external systems to provide users with various services dynamically. AGrIP has proved scalable and efficient in the development and application of industrial projects.
Fuzzy information measures play an important part in measuring the degree of similarity between two pattern vectors in fuzzy circumstances. In this paper, two new fuzzy information measures are established. First, the classical measures, such as the dissimilarity measure (DM) and the similarity measure (SM), are studied, the axiomatic theory of fuzzy entropy is surveyed, and various definitions of fuzzy entropy are discussed. Second, based on the idea of Shannon information entropy, the concepts of fuzzy joint entropy and fuzzy conditional entropy are proposed, and their basic properties are stated and proved. Finally, two new measures, the fuzzy absolute information measure (FAIM) and the fuzzy relative information measure (FRIM), are established; they can be used to measure the degree of similarity between fuzzy sets A and B. This provides a new research approach for studies of pattern similarity measures.
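For readers unfamiliar with the ingredients, a small sketch of a classical fuzzy-set similarity measure and a De Luca-Termini style fuzzy entropy is given below (the paper's FAIM and FRIM are not reproduced); fuzzy sets are represented as membership vectors on a common universe.

# Classical building blocks: a similarity/dissimilarity measure between two
# fuzzy sets and a normalized (De Luca-Termini style) fuzzy entropy.
import numpy as np

def similarity(a, b):
    # Max-min similarity: sum of pointwise minima over sum of pointwise maxima.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def dissimilarity(a, b):
    return 1.0 - similarity(a, b)

def fuzzy_entropy(a):
    # Normalized fuzzy entropy: 0 for crisp sets, 1 when every membership is 0.5.
    a = np.clip(np.asarray(a, float), 1e-12, 1 - 1e-12)
    h = -(a * np.log(a) + (1 - a) * np.log(1 - a))
    return h.sum() / (len(a) * np.log(2))

A = [0.9, 0.7, 0.1, 0.0]
B = [0.8, 0.6, 0.3, 0.1]
print(f"similarity(A,B) = {similarity(A, B):.3f}")
print(f"dissimilarity(A,B) = {dissimilarity(A, B):.3f}")
print(f"entropy(A) = {fuzzy_entropy(A):.3f}, entropy(B) = {fuzzy_entropy(B):.3f}")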
In this paper, based on 3D wavelet moments, we present a new method for describing 3D objects called fractal scale descriptors. Like wavelet moments, they remain robust to translation, rotation, and scale, and they have multi-resolution features in the radial direction, which handle noise to some extent and provide multi-level features to satisfy various requirements. Furthermore, the new method is superior to the original 3D wavelet moments in computational complexity, as it uses the fast algorithm for spherical harmonics together with the Mallat algorithm for wavelets.
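To give a flavor of the spherical-harmonic machinery the method builds on (this is not the fractal scale descriptor itself, and it omits the wavelet/Mallat stage entirely), a minimal rotation-invariant shell descriptor for a 3D point cloud might be sketched as follows:

# Rotation-invariant per-degree spherical-harmonic energies on radial shells,
# one classical ingredient of descriptors in this family (illustrative only).
import numpy as np
from scipy.special import sph_harm

def shell_descriptor(points, n_shells=4, max_degree=4):
    p = np.asarray(points, float)
    p = p - p.mean(axis=0)                      # translation invariance
    r = np.linalg.norm(p, axis=1)
    r_max = r.max()
    theta = np.arctan2(p[:, 1], p[:, 0])        # azimuthal angle
    phi = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1, 1))  # polar angle
    desc = np.zeros((n_shells, max_degree + 1))
    for s in range(n_shells):
        mask = (r >= s * r_max / n_shells) & (r < (s + 1) * r_max / n_shells + 1e-9)
        if not np.any(mask):
            continue
        for l in range(max_degree + 1):
            # The sum of |coefficient|^2 over orders m is invariant to rotation.
            energy = 0.0
            for m in range(-l, l + 1):
                c = np.mean(sph_harm(m, l, theta[mask], phi[mask]))
                energy += abs(c) ** 2
            desc[s, l] = energy
    return desc / np.maximum(desc.sum(), 1e-12)  # make clouds of different size comparable

rng = np.random.default_rng(3)
cloud = rng.normal(size=(500, 3))
print(np.round(shell_descriptor(cloud), 3))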