Authors: Heas, P.; Herzet, C.; Combes, B.
Affiliations: Univ Rennes, INRIA, 263 Ave Gen Leclerc, Campus Beaulieu, F-35042 Rennes, France; Univ Rennes, IRMAR, 263 Ave Gen Leclerc, Campus Beaulieu, F-35042 Rennes, France; Univ Rennes, IRISA, 263 Ave Gen Leclerc, Campus Beaulieu, F-35042 Rennes, France; Univ Rennes, Ensai, CNRS, CREST UMR 9194, Rennes, France
Reduced modeling of a computationally demanding dynamical system aims at approximating its trajectories while optimizing the trade-off between accuracy and computational complexity. In this work, we propose to achieve such an approximation by first embedding the trajectories in a reproducing kernel Hilbert space (RKHS), which offers appealing approximation and calculation capabilities, and then solving the associated reduced model problem. More specifically, we propose a new efficient algorithm for data-driven reduced modeling of nonlinear dynamics based on linear approximations in an RKHS. This algorithm takes advantage of the closed-form solution of a low-rank constrained optimization problem while advantageously exploiting kernel-based computations. As shown by numerical simulations, reduced modeling with this algorithm yields gains in approximation accuracy and in computational complexity with respect to existing approaches.
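The abstract does not spell out the algorithm, but its flavor can be illustrated with a kernel-DMD-style sketch: snapshot pairs are embedded through a Gaussian kernel, and a rank-r linear operator acting on reduced coordinates is fitted using only Gram-matrix computations. The kernel choice, the rank, the regularization, and all function names below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    # Gaussian kernel Gram matrix between the columns of A and B.
    d2 = (A * A).sum(0)[:, None] + (B * B).sum(0)[None, :] - 2.0 * A.T @ B
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_reduced_model(X, Y, rank=5, sigma=1.0, reg=1e-8):
    # X, Y: (state_dim, n_snapshots) arrays holding snapshot pairs y_t = x_{t+1}.
    G = gaussian_gram(X, X, sigma)                       # K(X, X)
    A = gaussian_gram(Y, X, sigma)                       # K(Y, X)
    evals, evecs = np.linalg.eigh(G + reg * np.eye(G.shape[0]))
    idx = np.argsort(evals)[::-1][:rank]                 # leading spectral directions
    U = evecs[:, idx] / np.sqrt(evals[idx])              # whitened reduced basis in the RKHS
    K_r = U.T @ A @ U                                    # rank-r operator on reduced coordinates
    return U, K_r
```

The eigenvalues of K_r then approximate dominant linear (Koopman-type) modes of the embedded dynamics; everything is computed from kernel evaluations, never from explicit feature maps.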
Interacting particle systems (IPSs) are a very important class of dynamical systems, arising in domains such as biology, physics, sociology and engineering. In many applications, these systems can be very large, making their simulation and control, as well as related numerical tasks, very challenging. Kernel methods, a powerful tool in machine learning, offer promising approaches for analyzing and managing IPSs. This paper provides a comprehensive study of applying kernel methods to IPSs, including the development of numerical schemes and the exploration of mean-field limits. We present novel applications and numerical experiments demonstrating the effectiveness of kernel methods for surrogate modelling and state-dependent feature learning in IPSs. Our findings highlight the potential of these methods for advancing the study and control of large-scale IPSs.
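As a rough illustration of kernel-based surrogate modelling for an IPS (not the specific schemes developed in the paper), one can regress observed state increments on states with kernel ridge regression and then roll the fitted map forward cheaply. The data, kernel and hyperparameters below are synthetic stand-ins.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical training data: flattened particle configurations x_t and the
# increments x_{t+1} - x_t that an expensive IPS simulator would produce.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2 * 50))                  # 200 snapshots of 50 particles in 2-D
dX = 0.01 * np.tanh(X) + 0.001 * rng.standard_normal(X.shape)   # stand-in dynamics

surrogate = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X, dX)

def rollout(x0, steps):
    # Cheap surrogate rollout: repeatedly apply the learned increment map.
    traj = [x0]
    for _ in range(steps):
        traj.append(traj[-1] + surrogate.predict(traj[-1][None, :])[0])
    return np.asarray(traj)

trajectory = rollout(X[0], steps=20)                     # (21, 100) surrogate trajectory
```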
We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks: small engineered perturbations of the input data can deceive the classifier into predicting the wrong result. Nonetheless, we also show that simple defense strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks. Our results find applications in security-critical learning problems and in mitigating the effect of some forms of quantum noise, since the attacker can also be understood as part of the surrounding environment.
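The attack-and-defend loop can be mimicked classically. The sketch below uses an ordinary RBF-kernel SVM in place of a quantum kernel (a quantum kernel would only change how the Gram matrix is evaluated), a naive random-search attack, and augmentation with the crafted points under their original labels; the dataset, epsilon and gamma are arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Classical stand-in for the hybrid classifier: an SVM with an RBF kernel.
X, y = make_moons(200, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

def random_attack(x, label, eps=0.3, tries=200, rng=np.random.default_rng(1)):
    # Naive attack: search for a small perturbation that flips the prediction.
    for _ in range(tries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if clf.predict((x + delta)[None, :])[0] != label:
            return x + delta
    return None

# Defense by data augmentation: keep the crafted points with their true labels.
adv_X, adv_y = [], []
for x_i, y_i in zip(X[:20], y[:20]):
    a = random_attack(x_i, y_i)
    if a is not None:
        adv_X.append(a)
        adv_y.append(y_i)        # a small perturbation does not change the true class
if adv_X:
    robust_clf = SVC(kernel="rbf", gamma=2.0).fit(
        np.vstack([X, np.asarray(adv_X)]), np.concatenate([y, adv_y]))
```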
We introduce a data-driven model approximation method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method is based on embedding the nonlinear system in a high- (or infinite-) dimensional reproducing kernel Hilbert space (RKHS) where linear balanced truncation may be carried out implicitly. This leads to a nonlinear reduction map which can be combined with a representation of the system belonging to an RKHS to give a closed, reduced-order dynamical system which captures the essential input-output characteristics of the original model. Working in an RKHS provides a convenient, general functional-analytic framework for theoretical understanding. Empirical simulations illustrating the approach are also provided.
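For reference, the linear balanced-truncation step that the paper carries out implicitly in an RKHS looks as follows in the ordinary finite-dimensional setting; the random test system and the reduction order below are made up for illustration.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    # Gramians of the stable linear system dx/dt = A x + B u, y = C x.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Wc, Wo = 0.5 * (Wc + Wc.T), 0.5 * (Wo + Wo.T)        # symmetrize against round-off
    L = cholesky(Wc, lower=True)
    lam, U = np.linalg.eigh(L.T @ Wo @ L)                # eigenvalues = squared Hankel SVs
    order = np.argsort(lam)[::-1]
    s = np.sqrt(np.clip(lam[order], 1e-12, None))        # Hankel singular values
    U = U[:, order]
    T = L @ U @ np.diag(s ** -0.5)                       # balancing transformation
    Tinv = np.diag(s ** 0.5) @ U.T @ np.linalg.inv(L)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s

# Hypothetical stable test system.
rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # shift to make A stable
B = rng.standard_normal((n, 2))
C = rng.standard_normal((1, n))
Ar, Br, Cr, hankel_svs = balanced_truncation(A, B, C, r=3)
```

In the balanced coordinates both Gramians equal diag(hankel_svs), so truncating the weakly controllable and observable states is what discards the least important input-output directions.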
This paper introduces two feature selection methods to deal with heterogeneous data that include continuous and categorical variables. We propose to plug a dedicated kernel that handles both kinds of variables into a Recursive Feature Elimination procedure using either a non-linear SVM or Multiple Kernel Learning. These methods are shown to offer state-of-the-art performance on a variety of high-dimensional classification tasks. (C) 2015 Elsevier B.V. All rights reserved.
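A hedged sketch of the SVM variant of this idea: a simple mixed kernel (RBF on the continuous columns times a matching kernel on the categorical columns, a stand-in for the paper's dedicated kernel) is passed to an SVM as a precomputed Gram matrix inside a greedy backward-elimination loop. All function names and the elimination criterion are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mixed_kernel(X1, X2, cont_idx, cat_idx, gamma=1.0):
    # RBF on continuous columns combined with a 0/1 matching kernel on categorical columns.
    d2 = ((X1[:, None, cont_idx] - X2[None, :, cont_idx]) ** 2).sum(-1)
    k_cont = np.exp(-gamma * d2)
    k_cat = (X1[:, None, cat_idx] == X2[None, :, cat_idx]).mean(-1) if len(cat_idx) else 1.0
    return k_cont * k_cat

def rfe_ranking(X, y, cont_idx, cat_idx):
    # Greedy backward elimination: drop the feature whose removal hurts CV accuracy least.
    remaining, ranking = list(cont_idx) + list(cat_idx), []
    while len(remaining) > 1:
        scores = []
        for f in remaining:
            keep = [g for g in remaining if g != f]
            K = mixed_kernel(X, X, [g for g in keep if g in cont_idx],
                             [g for g in keep if g in cat_idx])
            scores.append(cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean())
        worst = remaining[int(np.argmax(scores))]
        ranking.append(worst)
        remaining.remove(worst)
    return remaining + ranking[::-1]        # most useful feature first

# Hypothetical usage: columns 0-3 continuous, columns 4-5 integer-coded categorical.
# order = rfe_ranking(X, y, cont_idx=[0, 1, 2, 3], cat_idx=[4, 5])
```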
Kernel methods have become an increasingly popular tool for machine learning tasks such as classification, regression or novelty detection. They exhibit good generalization performance on many real-life datasets, there are few free parameters to adjust, and the architecture of the learning machine does not need to be found by experimentation. In this tutorial, we survey this subject with a principal focus on the most well-known models based on kernel substitution, namely, support vector machines. (C) 2002 Elsevier Science B.V. All rights reserved.
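A minimal example of the kernel substitution the tutorial focuses on: the same SVM estimator is trained with a linear and with an RBF kernel on a toy dataset; only the kernel argument changes. The dataset and hyperparameters are arbitrary.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data that no linear decision boundary can separate.
X, y = make_circles(n_samples=300, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    # Kernel substitution: swap the kernel, keep the learning machine unchanged.
    clf = SVC(kernel=kernel, C=1.0).fit(X_tr, y_tr)
    print(kernel, clf.score(X_te, y_te))
```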
The success of the support vector machine (SVM) has given rise to the development of a new class of theoretically elegant learning machines which use the central concept of kernels and the associated reproducing kernel Hilbert space (RKHS). Exponential families, a standard tool in statistics, can be used to unify many existing machine learning algorithms based on kernels (such as the SVM) and to invent novel ones quite effortlessly. A new derivation of the novelty detection algorithm based on the one-class SVM is proposed to illustrate the power of the exponential family model in an RKHS. (c) 2005 Published by Elsevier B.V.
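In practice, the novelty detection algorithm referred to here is the one-class SVM; a minimal sketch with scikit-learn follows, where the Gaussian training cloud and the nu/gamma values are purely illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))               # "normal" observations only

# One-class SVM in an RBF-induced RKHS: nu bounds the fraction of training outliers.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_train)

X_new = np.array([[0.1, -0.2], [6.0, 6.0]])       # one typical point, one novelty
print(detector.predict(X_new))                    # +1 = inlier, -1 = novelty
```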
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.
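As a concrete instance, a Gaussian RBF kernel built from the log-Euclidean distance between symmetric positive definite (SPD) matrices, one of the metrics for which such kernels are positive definite, can be plugged into a standard SVM as a precomputed Gram matrix. The random SPD matrices and hyperparameters below are synthetic illustrations, not the paper's experiments.

```python
import numpy as np
from sklearn.svm import SVC

def spd_log(P):
    # Matrix logarithm of an SPD matrix via eigendecomposition (always real here).
    w, V = np.linalg.eigh(P)
    return (V * np.log(w)) @ V.T

def log_euclidean_gram(mats_a, mats_b, gamma=0.5):
    # Gaussian kernel k(P, Q) = exp(-gamma * ||log(P) - log(Q)||_F^2).
    la, lb = [spd_log(P) for P in mats_a], [spd_log(Q) for Q in mats_b]
    K = np.empty((len(la), len(lb)))
    for i, Li in enumerate(la):
        for j, Lj in enumerate(lb):
            K[i, j] = np.exp(-gamma * np.linalg.norm(Li - Lj, "fro") ** 2)
    return K

# Random SPD matrices from two synthetic classes.
rng = np.random.default_rng(0)
def random_spd(scale):
    A = rng.standard_normal((3, 3))
    return A @ A.T + scale * np.eye(3)

mats = [random_spd(0.1) for _ in range(30)] + [random_spd(2.0) for _ in range(30)]
labels = np.array([0] * 30 + [1] * 30)

K = log_euclidean_gram(mats, mats)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.score(K, labels))                      # training accuracy on the Gram matrix
```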
Recently, genome-wide DNA markers have been used in breeding value estimation of livestock species. The computational technique is known as genomic selection. Typically, a large number of marker effects are estimated from a small number of animals, which presents an under-determined problem. In this paper, we propose sparse marker selection methods using haplotypes for both breeding value estimation and QTL mapping. By applying a two-stage regression strategy, markers are selected in the first stage; in the second stage, the selected markers are fitted in a range of models including linear, kernel and semi-parametric models. The estimation accuracy of breeding values is measured by the correlation coefficient, as well as the regression coefficient, between the true breeding values and those estimated by the models. We show that the estimation accuracy obtained using sparse markers, as few as 5000 or even 500 dimensions, is comparable to that obtained from genome-wide markers of about 230,000 dimensions of DNA haplotypes. The selected sparse markers can also be used for QTL mapping. In this paper, we use protein yield to demonstrate the methods, and show that loci of large effects confirm published QTL. (C) 2013 Elsevier Inc. All rights reserved.
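The two-stage strategy can be sketched generically: an l1-penalized first stage selects a sparse set of markers, and a kernel model is then fitted on the selected columns. The simulated 0/1/2 genotype codes and the specific estimators below are stand-ins for the haplotype data and models compared in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_animals, n_markers = 300, 2000
X = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)   # genotype codes
true_effects = np.zeros(n_markers)
true_effects[rng.choice(n_markers, 20, replace=False)] = rng.normal(0, 1, 20)
y = X @ true_effects + rng.normal(0, 1.0, n_animals)                 # phenotype / target

# Stage 1: sparse marker selection with an l1 penalty.
stage1 = Lasso(alpha=0.1, max_iter=5000).fit(X, y)
selected = np.flatnonzero(stage1.coef_)
if selected.size == 0:                      # fallback if the penalty removes everything
    selected = np.argsort(np.abs(stage1.coef_))[-500:]

# Stage 2: refit a kernel model using only the selected markers.
stage2 = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / len(selected)).fit(X[:, selected], y)

# Accuracy measured as in the paper: correlation between true and estimated values.
pred = stage2.predict(X[:, selected])
print(len(selected), np.corrcoef(y, pred)[0, 1])
```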
The portfolio optimization problem has been studied extensively. In this paper, we look at this problem from a different perspective. Several researchers argue that the USA equity market is efficient. Some of the studies show that the stock market is not efficient around the earnings season. Based on these findings, we formulate the problem as a classification problem using state-of-the-art machine learning techniques such as the minimax probability machine (MPM) and support vector machines (SVM). The MPM method finds a bound on the misclassification probabilities. On the other hand, the SVM finds a hyperplane that maximizes the distance between the two classes. Both methods yield similar results for short-term portfolio management. (c) 2005 Elsevier Ltd. All rights reserved.
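The classification view of the problem can be sketched with the SVM half of the comparison (scikit-learn has no MPM implementation): features computed around the earnings announcement predict whether a stock should enter the long or the short side of a portfolio. The feature definitions and data below are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical per-stock features around earnings: surprise, pre-announcement
# return, volume change; label = 1 if the stock subsequently outperformed.
X = rng.standard_normal((400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)

# Toy trading rule: stocks classified as 1 go long, the rest go short.
signals = clf.predict(X_te)
print("test accuracy:", clf.score(X_te, y_te), "long positions:", signals.sum())
```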