A model selection method based on tabu search is proposed to build support vector machines (binary decision functions) of reduced complexity and efficient generalization. The aim is to build a fast and efficient support vector machine classifier. A criterion is defined to evaluate decision function quality, blending the recognition rate and the complexity of a binary decision function. The selection of the simplification level by vector quantization, of a feature subset, and of the support vector machine hyperparameters is performed by tabu search to optimize the defined quality criterion, so as to find a good sub-optimal model in tractable time.
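To make the search procedure concrete, here is a minimal, self-contained sketch of tabu search over a discrete grid of two hyperparameters. The quality surface below is a toy stand-in for the paper's accuracy-versus-complexity criterion (the actual criterion, grid, and tabu tenure are not given in the abstract, so everything here is illustrative):

```python
import random

# Hypothetical grid of (simplification level, kernel-width index) pairs.
grid = [(s, g) for s in range(5) for g in range(5)]

def quality(p):
    """Toy stand-in for 'recognition rate minus complexity', peaking at (2, 3)."""
    s, g = p
    return -((s - 2) ** 2 + (g - 3) ** 2)

def neighbors(p):
    s, g = p
    cand = [(s + ds, g + dg) for ds, dg in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
    return [q for q in cand if q in grid]

random.seed(0)
current = random.choice(grid)
best, best_q = current, quality(current)
tabu = [current]                       # short-term memory of recent moves
for _ in range(30):
    moves = [p for p in neighbors(current) if p not in tabu]
    if not moves:
        break
    current = max(moves, key=quality)  # best admissible neighbor, even if worse
    tabu.append(current)
    if len(tabu) > 5:                  # fixed tabu tenure
        tabu.pop(0)
    if quality(current) > best_q:
        best, best_q = current, quality(current)

print(best, best_q)  # → (2, 3) 0
```

The tabu list is what distinguishes this from plain hill climbing: recently visited models are forbidden, so the search can escape local optima of the quality criterion instead of cycling.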
ISBN: (Print) 9780769533056
Dimensionality reduction has long been an active research topic within statistics, pattern recognition, machine learning and data mining. It can improve the efficiency and effectiveness of data mining by reducing the dimensions of the feature space and removing irrelevant and redundant information. In this paper we transform the attribute selection problem into an optimization problem that seeks the attribute subset with maximal fractal dimension while simultaneously satisfying a restriction on the number of attributes. To avoid exhaustive search in the huge attribute subset space, we integrate individual attribute priority with attribute subset evaluation for dimensionality reduction and propose the unsupervised Sequential Forward Fractal Dimensionality Reduction (SFFDR) algorithm. Our experiments on synthetic and real datasets show that the proposed algorithm obtains a satisfactory attribute subset with rather low time complexity.
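The sequential-forward part of the idea can be sketched as a greedy loop: grow the attribute subset one attribute at a time, adding whichever attribute most increases the subset score, until the size restriction is hit or no attribute helps. The score below is a toy stand-in for the fractal (correlation) dimension, which the abstract does not specify how to compute:

```python
def score(subset):
    """Hypothetical subset score with diminishing returns (stands in for
    the fractal dimension of the projected data)."""
    weights = {"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.1}
    return sum(weights[f] for f in subset) - 0.05 * len(subset) ** 2

def forward_select(attributes, max_size):
    chosen = []
    while len(chosen) < max_size:
        remaining = [a for a in attributes if a not in chosen]
        best = max(remaining, key=lambda a: score(chosen + [a]))
        if score(chosen + [best]) <= score(chosen):
            break  # no remaining attribute improves the score: stop early
        chosen.append(best)
    return chosen

print(forward_select(["a", "b", "c", "d"], max_size=3))  # → ['a', 'b', 'c']
```

Greedy forward selection evaluates O(n·k) subsets for n attributes and limit k, which is what makes the approach tractable compared with the 2^n exhaustive search the abstract mentions.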
ISBN: (Print) 9781424422388
Recently, locality sensitive discriminant analysis (LSDA) was proposed for dimensionality reduction. Matrix data, such as images, are often vectorized so that the LSDA algorithm can find the intrinsic manifold structure. Such a matrix-to-vector transform may cause the loss of structural information residing in the original 2D images. This paper first proposes an algorithm named two-dimensional locality sensitive discriminant analysis (2DLSDA), which extracts the proper features directly from image matrices based on the LSDA algorithm; experimental results on the ORL database show the effectiveness of the proposed algorithm. Then 2DLSDA plus Fisherface, presented for further dimensionality reduction, is compared with other dimension reduction methods, namely Eigenface, LSDA, and 2DLSDA plus PCA. Experiments show that conducting Fisherface after 2DLSDA achieves high recognition accuracy.
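The structural point of 2D methods is that an image matrix X is reduced by a projection Y = X·W rather than by flattening X into a long vector first. The sketch below shows only that projection step; the matrix W here is an arbitrary column selector, not the discriminant projection that 2DLSDA actually learns:

```python
import random

random.seed(0)
h, w, d = 4, 5, 2  # image height, width, and reduced column dimension

# A toy "image" matrix and a trivial w x d projection picking the first d columns.
X = [[random.random() for _ in range(w)] for _ in range(h)]
W = [[1.0 if j == i else 0.0 for j in range(d)] for i in range(w)]

# Y = X @ W: each image row is preserved as a row, only the width shrinks.
Y = [[sum(X[r][k] * W[k][c] for k in range(w)) for c in range(d)] for r in range(h)]
print(len(Y), len(Y[0]))  # → 4 2
```

Because rows stay rows, neighboring-pixel relationships within each row survive the reduction, which is exactly the 2D structure that vectorization would destroy.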
ISBN: (Print) 9781424422388
In this paper, a new concept of generalized lambda-fuzzy measure is introduced: a non-negative, extended real-valued set function that satisfies the generalized sigma-lambda-rule and is not necessarily monotone. The relationships between this new concept and the lambda-fuzzy measure, the signed measure and others are discussed; notably, any generalized lambda-fuzzy measure is a signed measure under some conditions. Moreover, an application of the generalized lambda-fuzzy measure to image enhancement is given, and the corresponding experiment shows that different values of lambda have different influences on an image.
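For context, the classical (non-generalized) lambda-rule for disjoint sets A and B is g(A ∪ B) = g(A) + g(B) + λ·g(A)·g(B). A sketch of building such a measure from singleton densities follows; the densities and λ are purely illustrative (in the classical Sugeno setting λ would be chosen so that the whole space measures 1):

```python
def lambda_measure(densities, lam):
    """Combine singleton densities with the lambda-rule:
    g(A ∪ {x}) = g(A) + g({x}) + lam * g(A) * g({x})."""
    g = 0.0
    for d in densities:
        g = g + d + lam * g * d
    return g

# Illustrative densities for a three-element set, lambda = 0.5.
print(round(lambda_measure([0.2, 0.3, 0.4], lam=0.5), 4))  # → 1.036
```

With λ > 0 the measure is superadditive (the whole exceeds the sum of the parts, 1.036 > 0.9 here); with λ < 0 it is subadditive, and λ = 0 recovers an ordinary additive measure.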
This article proposes a methodology for the analysis of the causes and types of workplace accidents (here focusing specifically on floor-level falls). The approach is based on machine learning techniques: Bayesian networks trained using different algorithms (with and without a priori information), classification trees, support vector machines and extreme learning machines. The results obtained using the different techniques are compared in terms of explanatory capacity and predictive potential, both factors facilitating the development of risk prevention measures. Bayesian networks prove to be the best all-round technique for this type of study, as they combine a powerful interpretative capacity with a predictive capacity comparable to that of the best available techniques. Moreover, Bayesian networks force experts to apply a scientific approach to the construction and progressive enrichment of their models, and lay the basis for a solidly grounded accident prevention policy. Furthermore, the procedure enables better variable definition and better structuring of the data capture, coding, and quality control processes.
ISBN: (Print) 9781605603179
This paper presents a stagewise least square (SLS) loss function for classification. It uses a least square form within each stage to approximate a bounded monotonic nonconvex loss function in a stagewise manner. Several benefits follow from the SLS loss function: (i) higher generalization accuracy and better scalability than the classical least square loss; (ii) better performance and robustness than convex losses (e.g., the hinge loss of SVM); (iii) computational advantages compared with nonconvex losses (e.g., the ramp loss in ψ-learning); (iv) the ability to resist the myopia of Empirical Risk Minimization and to boost the margin without boosting the complexity of the classifier. In addition, it naturally results in a kernel machine that is as sparse as an SVM, yet much faster and simpler to train. A fast online learning algorithm with an integrated sparsification procedure is also provided. Experimental results on several benchmarks confirm the advantages of the proposed approach.
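The three loss families being contrasted can be seen side by side as functions of the margin m = y·f(x). These are the standard textbook forms, not the paper's exact stagewise construction; they illustrate why a bounded nonconvex loss is robust to outliers while the convex losses grow without bound:

```python
def hinge(m):
    """Convex SVM hinge loss: linear growth for misclassified points."""
    return max(0.0, 1.0 - m)

def least_square(m):
    """Classical least square loss: quadratic growth, very outlier-sensitive."""
    return (1.0 - m) ** 2

def ramp(m, s=-1.0):
    """Bounded nonconvex ramp loss: hinge clipped at margin s, cap 1 - s."""
    return max(0.0, 1.0 - m) - max(0.0, s - m)

for m in [-5.0, 0.0, 1.0, 2.0]:
    print(m, hinge(m), least_square(m), ramp(m))
```

At a badly misclassified point (m = -5) the hinge pays 6 and the least square loss pays 36, while the ramp loss is capped at 2; a bounded loss like the one SLS approximates therefore limits the influence any single outlier can exert on the decision boundary.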
ISBN: (Print) 9783540698579
We present a framework combining information retrieval with machine learning and (pre-)processing for named entity recognition in order to extract events from a large document collection. The extracted events become input to a data mining component which delivers the final output to specific user questions. Our case study is the public collection of minutes of plenary sessions of the German parliament and of petitions to the German parliament.
ISBN: (Print) 9781605582054
The success of kernel methods, including support vector machines (SVMs), strongly depends on the design of appropriate kernels. While kernels were initially designed to handle fixed-length data, their extension to unordered, variable-length data became necessary for real pattern recognition problems such as object recognition and bioinformatics. In this paper we focus on object recognition using a new type of kernel referred to as "context-dependent". Objects, seen as constellations of local features (interest points, regions, etc.), are matched by minimizing an energy function mixing (1) a fidelity term which measures the quality of feature matching, (2) a neighborhood criterion which captures the object geometry, and (3) a regularization term. We show that the fixed point of this energy is a "context-dependent" kernel ("CDK") which also satisfies the Mercer condition. Experiments conducted on object recognition show that when plugging our kernel into SVMs, we clearly outperform SVMs with "context-free" kernels.
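A toy version of such a three-term matching energy is sketched below for two objects with two local features each. The exact functional form and the weights alpha and beta are assumptions, not the paper's formulation; the point is only how the fidelity, neighborhood, and regularization terms combine:

```python
def energy(M, D, N1, N2, alpha=0.5, beta=0.1):
    """Energy of a candidate matching matrix M between features of two objects.
    D[i][j]: feature dissimilarity; N1, N2: neighborhood (adjacency) matrices."""
    n, p = len(M), len(M[0])
    # (1) Fidelity: cost of the matched feature pairs.
    fidelity = sum(M[i][j] * D[i][j] for i in range(n) for j in range(p))
    # (2) Neighborhood: reward matchings whose neighboring features also match.
    context = -sum(
        M[i][j] * N1[i][k] * N2[j][l] * M[k][l]
        for i in range(n) for j in range(p)
        for k in range(n) for l in range(p)
    )
    # (3) Regularization on the matching matrix itself.
    reg = sum(M[i][j] ** 2 for i in range(n) for j in range(p))
    return fidelity + alpha * context + beta * reg

D = [[0.0, 1.0], [1.0, 0.0]]     # feature i of object 1 resembles feature i of object 2
N1 = N2 = [[0.0, 1.0], [1.0, 0.0]]  # the two features are neighbors in each object
M_id = [[1.0, 0.0], [0.0, 1.0]]     # geometry-consistent matching
M_swap = [[0.0, 1.0], [1.0, 0.0]]   # crossed matching

print(energy(M_id, D, N1, N2), energy(M_swap, D, N1, N2))
```

The geometry-consistent matching gets the lower energy, because it pays no fidelity cost and earns the neighborhood reward; iterating an update that decreases this energy toward its fixed point is, per the abstract, what yields the context-dependent kernel.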
Classifier fusion strategies have shown great potential to enhance the performance of pattern recognition systems. There is an agreement among researchers in classifier combination that the major factor for producing ...