The accuracy and complexity of a kernel learning algorithm are determined by the set of kernels over which it is able to optimize. An ideal set of kernels should: admit a linear parameterization (tractability); be dense in the set of all kernels (accuracy); and every member should be universal, so that the hypothesis space is infinite-dimensional (scalability). Currently, no class of kernels meets all three criteria - e.g., Gaussians are neither tractable nor accurate, and polynomials are not scalable. We propose a new class that meets all three criteria - the Tessellated Kernel (TK) class. Specifically, the TK class: admits a linear parameterization using positive matrices; is dense in the set of all kernels; and every element in the class is universal. This implies that using TK kernels for learning the kernel can obviate the need to select candidate kernels in algorithms such as SimpleMKL, or parameters such as the bandwidth. Numerical tests on soft-margin Support Vector Machine (SVM) problems show that algorithms using TK kernels outperform other kernel learning algorithms and neural networks. Furthermore, our results show that when the ratio of training data to features is high, the improvement of TK over MKL increases significantly.
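To make the kernel-learning setting concrete, the sketch below alternates an SVM solve with a projected-gradient step on simplex-constrained kernel weights, in the spirit of SimpleMKL (the real algorithm uses a reduced gradient with line search). The candidate bandwidths, step size, and toy data are illustrative assumptions, not the TK parameterization from the paper.

```python
# Minimal multiple-kernel-learning sketch: learn simplex weights d over
# candidate Gaussian kernels by alternating an SVM solve with a gradient
# step on d. All hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def gaussian_kernel(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

kernels = [gaussian_kernel(X, g) for g in (0.01, 0.1, 1.0)]  # candidate set
d = np.full(len(kernels), 1.0 / len(kernels))                # simplex weights

for _ in range(20):
    K = sum(w * Kk for w, Kk in zip(d, kernels))
    svm = SVC(kernel="precomputed", C=1.0).fit(K, y)
    sv, beta = svm.support_, svm.dual_coef_.ravel()          # beta_i = y_i * alpha_i
    # SimpleMKL gradient: dJ/dd_k = -(1/2) * beta^T K_k[sv, sv] beta
    grad = np.array([-0.5 * beta @ Kk[np.ix_(sv, sv)] @ beta for Kk in kernels])
    d = np.clip(d - 0.5 * grad / (np.abs(grad).max() + 1e-12), 0, None)
    d /= d.sum()                                             # back to the simplex

print("learned kernel weights:", np.round(d, 3))
```

The paper's point is that a dense, linearly parameterized class such as TK removes the need to hand-pick the candidate list `(0.01, 0.1, 1.0)` above in the first place.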
Linear discriminant analysis (LDA) has been widely used for linear dimension reduction. However, LDA has limitations: one of the scatter matrices is required to be nonsingular, and nonlinearly clustered structure is not easily captured. To overcome the problems caused by the singularity of the scatter matrices, a generalization of LDA based on the generalized singular value decomposition (GSVD) was recently developed. In this paper, we propose a nonlinear discriminant analysis based on the kernel method and the GSVD. The GSVD is applied to solve the generalized eigenvalue problem formulated in the feature space defined by a nonlinear mapping through kernel functions. Our GSVD-based kernel discriminant analysis is compared theoretically with other kernel-based nonlinear discriminant analysis algorithms. The experimental results show that our method is an effective nonlinear dimension reduction method.
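SciPy exposes no GSVD routine, so the sketch below realizes the same feature-space generalized eigenvalue problem via the common kernel-FDA route: regularize the within-class matrix and call scipy.linalg.eigh. That regularization is precisely what the paper's GSVD formulation avoids, so treat this as a stand-in, not the paper's method; the kernel, gamma, and reg values are assumptions.

```python
# Kernel Fisher discriminant sketch: generalized eigenproblem M a = lambda N a
# built from kernel-space between-class (M) and within-class (N) scatter.
import numpy as np
from scipy.linalg import eigh

def rbf(X, Z, gamma=0.5):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_fda(X, y, gamma=0.5, reg=1e-3):
    K = rbf(X, X, gamma)
    n = len(y)
    m_total = K.mean(axis=1)
    M = np.zeros((n, n))                 # between-class scatter (kernel form)
    N = reg * np.eye(n)                  # within-class scatter, regularized
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Kc = K[:, idx]
        diff = Kc.mean(axis=1) - m_total
        M += len(idx) * np.outer(diff, diff)
        H = np.eye(len(idx)) - 1.0 / len(idx)   # centering within the class
        N += Kc @ H @ Kc.T
    w, A = eigh(M, N)                    # generalized eigenvectors, ascending
    return A[:, ::-1]                    # sort by decreasing eigenvalue

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # nonlinearly clustered
A = kernel_fda(X, y)
proj = rbf(X, X) @ A[:, :1]              # one-dimensional nonlinear embedding
```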
The present work investigates disturbances in a homogeneous, isotropic elastic medium with memory-dependent derivatives (MDDs). A one-dimensional problem is considered for a half-space whose surface is traction-free and subjected to the effects of thermodiffusion. The time variations are treated with the Laplace-transform technique. The theories of coupled and of generalized thermoelastic diffusion with one relaxation time follow as limiting cases. A direct approach is introduced to obtain the solutions in the Laplace-transform domain for different forms of the kernel functions and the time delay of the MDDs, both of which can be chosen arbitrarily. Numerical inversion is carried out to obtain the distributions of the considered variables in the physical domain, and these are illustrated graphically. Comparisons are shown in figures to assess the effects of the MDD parameters on all of the studied fields.
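For orientation, the memory-dependent derivative at issue is, in the form standard in this literature (going back to Wang and Li), a kernel-weighted average of the ordinary derivative over a sliding window; the kernel choices listed below are the ones commonly used, not necessarily the paper's exact selection.

```latex
% First-order memory-dependent derivative with time delay \omega > 0
% and kernel K:
\[
  D_\omega f(t) \;=\; \frac{1}{\omega} \int_{t-\omega}^{t} K(t-\xi)\, f'(\xi)\, \mathrm{d}\xi ,
\]
% with typical kernel choices on the sliding window [t-\omega, t]:
\[
  K(t-\xi) \;\in\; \Bigl\{\, 1,\;\; 1 - \tfrac{t-\xi}{\omega},\;\; \bigl(1 - \tfrac{t-\xi}{\omega}\bigr)^{2} \,\Bigr\}.
\]
```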
This work develops the notion of a kernel function for the heat equation in certain regions of $(n+1)$-dimensional Euclidean space and applies that notion to the study of the boundary behavior of nonnegative temperatures. The regions in question are bounded between spacelike hyperplanes and satisfy a parabolic Lipschitz condition at points on the lateral boundary. Kernel functions (normalized, nonnegative temperatures which vanish on the parabolic boundary except at a single point) are shown to exist and to be unique. A representation theorem for nonnegative temperatures is obtained and used to establish the existence of finite parabolic limits at the boundary, except for a set of heat-related measure zero.
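As background (my framing, not the paper's), the free-space model for such a kernel function is the Gauss-Weierstrass kernel, the fundamental solution of the heat equation: a nonnegative temperature on $\mathbb{R}^n \times (0,\infty)$ that vanishes on the initial boundary $\{t=0\}$ except at a single point.

```latex
% Gauss--Weierstrass kernel: the fundamental solution of the heat equation,
% singular at the single boundary point (x, t) = (0, 0):
\[
  W(x,t) \;=\; (4\pi t)^{-n/2} \exp\!\left( -\frac{|x|^{2}}{4t} \right),
  \qquad x \in \mathbb{R}^{n},\; t > 0 .
\]
% The paper's kernel functions play the analogous role at lateral boundary
% points of the parabolic-Lipschitz regions described above.
```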
For the last decade, interior-point methods that use barrier functions induced by real univariate kernel functions have been studied. In these interior-point methods, the algorithm stops when a solution is found that is close (in the barrier-function sense) to a point on the central path, to within the desired accuracy. However, this does not directly imply that the algorithm produces a solution of the prescribed accuracy. Until now, this issue had not been appropriately addressed. In this paper, we analyze the accuracy of the solutions produced by the aforementioned algorithms.
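For readers unfamiliar with the setup, the standard framework (Bai, El Ghami, and Roos) is sketched below; the stopping threshold $\tau$ is generic, and the logarithmic kernel is the prototypical example rather than anything specific to this paper.

```latex
% A kernel function is a univariate \psi : (0,\infty) \to [0,\infty) with
% \psi(1) = \psi'(1) = 0, \psi''(t) > 0, and \psi(t) \to \infty as
% t \to 0^{+} or t \to \infty. It induces a barrier on the scaled iterate v:
\[
  \Psi(v) \;=\; \sum_{i=1}^{n} \psi(v_i),
  \qquad \text{e.g. the logarithmic kernel}\quad
  \psi(t) \;=\; \frac{t^{2}-1}{2} \;-\; \log t .
\]
% The algorithm stops once \Psi(v) \le \tau; the paper asks what this
% proximity-based stopping rule actually guarantees about the accuracy
% of the returned solution.
```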
We consider the relativistic generalization of the quantum $A_{N-1}$ Calogero-Sutherland models due to Ruijsenaars, comprising the rational, hyperbolic, trigonometric, and elliptic cases. For each of these cases, we find an exact common eigenfunction for a generalization of the Ruijsenaars analytic difference operators that gives, as special cases, many different kernel functions; in particular, we find kernel functions for Chalykh-Feigin-Veselov-Sergeev-type deformations of such difference operators, which generalize known kernel functions for the Ruijsenaars models. We also discuss possible applications of our results.
This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, are calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of that pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it yields a network with better generalization ability than the original model. Experiments on a variation of the famous two-spiral problem and on real-world examples from the UCI Machine Learning Repository show that this model not only achieves a better classification rate than the original probabilistic neural network but also outperforms other well-known classification techniques.
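A minimal sketch of the idea follows: a probabilistic neural network whose Gaussian kernels carry a full, per-pattern covariance matrix estimated from each pattern's local environment. Here "local environment" is taken to mean the sample covariance of the k nearest neighbours; the abstract's rotation/variance factorization, and the values of k and the regularizer eps, are assumptions for illustration.

```python
# PNN with per-pattern full covariances: classify by summing, per class,
# the Gaussian kernel densities centred at the training patterns.
import numpy as np

def local_covariances(X, k=8, eps=1e-3):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    covs = []
    for i in range(len(X)):
        nbrs = X[np.argsort(d2[i])[1:k + 1]]            # k nearest neighbours
        covs.append(np.cov(nbrs, rowvar=False) + eps * np.eye(X.shape[1]))
    return covs

def pnn_predict(X_train, y_train, covs, X_test):
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for xi, yi, C in zip(X_train, y_train, covs):
        Cinv, det = np.linalg.inv(C), np.linalg.det(C)
        diff = X_test - xi
        q = np.einsum("nd,dk,nk->n", diff, Cinv, diff)  # Mahalanobis distances
        scores[:, np.searchsorted(classes, yi)] += np.exp(-0.5 * q) / np.sqrt(det)
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)                # XOR-like clusters
covs = local_covariances(X)
print("training accuracy:", (pnn_predict(X, y, covs, X) == y).mean())
```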
Interior point methods are not only the most effective methods for solving optimisation problems in practice, but also have polynomial time complexity. However, there is still a gap between the practical behaviour of interior point method algorithms and their theoretical complexity results. In this paper, focusing on linear programming problems, we introduce a new family of kernel functions that have simple, easy-to-check properties. We present a simplified analysis of the complexity of generic interior point methods based on the proximity functions induced by these kernel functions. Finally, we prove that this family of kernel functions leads to improved iteration bounds for large-update interior point methods.
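For context on what "improved iteration bounds" means here, the known benchmarks from the kernel-function literature (not this paper's specific results) are as follows.

```latex
% Large-update primal-dual IPMs for linear programming with the classical
% logarithmic kernel carry the iteration bound
\[
  O\!\left( n \,\log \frac{n}{\varepsilon} \right),
\]
% while suitably chosen kernel functions (e.g. self-regular ones) improve
% large-update methods to
\[
  O\!\left( \sqrt{n}\,\log n \,\log \frac{n}{\varepsilon} \right),
\]
% narrowing the gap to the small-update bound O(\sqrt{n} \log(n/\varepsilon)).
```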
This paper proposes a continuous technique for dealing with first-order linear mixed-type functional differential equations. The approach is built on reproducing kernel functions and their reproducing property. The error results of the numerical tests demonstrate that the approach gives good continuous approximations to the problems considered.
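To illustrate the flavour of such methods, the sketch below solves a toy mixed-type equation by least-squares collocation in the span of kernel translates. The Gaussian kernel, the delay tau, gamma, and the manufactured solution u(t) = sin t are all illustrative assumptions; the paper's reproducing kernel space and equations differ.

```python
# Collocation sketch for u'(t) = u(t - tau) - u(t + tau) + f(t) on [0, 1],
# with u prescribed on [-tau, 0] and [1, 1 + tau] (mixed-type boundary data).
import numpy as np

tau, gamma = 0.1, 25.0
K  = lambda t, c: np.exp(-gamma * (t[:, None] - c[None, :]) ** 2)
Kt = lambda t, c: -2 * gamma * (t[:, None] - c[None, :]) * K(t, c)  # dK/dt

u_true = np.sin                                       # manufactured solution
f = lambda t: np.cos(t) - np.sin(t - tau) + np.sin(t + tau)

centers = np.linspace(-tau, 1 + tau, 40)              # kernel centres
t_in = np.linspace(0.05, 0.95, 60)                    # interior collocation
t_bc = np.concatenate([np.linspace(-tau, 0, 10), np.linspace(1, 1 + tau, 10)])

# equation rows and boundary rows, solved in the least-squares sense
A = np.vstack([Kt(t_in, centers) - K(t_in - tau, centers) + K(t_in + tau, centers),
               K(t_bc, centers)])
b = np.concatenate([f(t_in), u_true(t_bc)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

t_test = np.linspace(0, 1, 200)                       # continuous approximant
err = np.abs(K(t_test, centers) @ coef - u_true(t_test)).max()
print(f"max abs error on [0,1]: {err:.2e}")
```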