In this article, we study the optimal design of positive definite kernels for high-dimensional interpolation. We endow the Sobolev spaces with probability measures induced by the positive definite kernels, so that the kernel-based estimators can be solved to maximize the kernel-based probabilities conditioned on the observed data. In practical implementations, there are many choices of positive definite kernels for constructing the kernel basis, such as Gaussian kernels with various shape parameters; hence it is an open problem which kernels are optimal. The kernel-based probabilities provide a novel way to search for the optimal kernels for the observed data. Combined with statistical techniques such as maximum likelihood estimation, we can solve for the optimal shape parameters of the Gaussian kernels via the kernel-based probabilities, even when the classical kernel-based methods cannot handle the uncertain data. (C) 2015 Elsevier Inc. All rights reserved.
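The shape-parameter selection described above can be sketched in a few lines: under the probabilistic (Gaussian-measure) view of kernel interpolation, the negative log marginal likelihood of the data is minimized over candidate shape parameters. This is a minimal illustrative sketch, not the authors' implementation; the function names, the test data, and the grid search are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, eps):
    """Gaussian kernel K(x, y) = exp(-eps^2 * ||x - y||^2), eps = shape parameter."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-(eps ** 2) * d2)

def neg_log_likelihood(eps, X, y, nugget=1e-8):
    """Negative log marginal likelihood of the data y under the Gaussian
    measure induced by the kernel (additive constants dropped).
    A small nugget keeps the Cholesky factorization stable."""
    K = gaussian_kernel(X, X, eps) + nugget * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# Grid search over candidate shape parameters (a simple stand-in for MLE).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (30, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
candidates = np.linspace(1.0, 10.0, 19)
eps_opt = min(candidates, key=lambda e: neg_log_likelihood(e, X, y))
```

In practice one would use a continuous optimizer rather than a grid, but the grid makes the trade-off visible: small eps gives flat, ill-conditioned kernels; large eps overlocalizes the basis.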
Based on a kernel-based approximation technique, we devise in this paper an efficient and accurate numerical scheme for solving a backward space-time fractional diffusion problem (BSTFDP). The kernels used in the approximation are the fundamental solutions of the space-time fractional diffusion equation, expressed in terms of the inverse Fourier transform of Mittag-Leffler functions. The use of the inverse fast Fourier transform (IFFT) enables an accurate and efficient evaluation of the fundamental solutions and gives a robust numerical algorithm for the solution of the BSTFDP. Since the BSTFDP is intrinsically ill-posed, we apply the standard Tikhonov regularization technique to obtain a stable solution to the highly ill-conditioned resultant system of linear equations. To choose an optimal regularization parameter, we combine the regularization technique with the generalized cross validation (GCV) method for an optimal placement of the source points in the use of fundamental solutions. Meanwhile, the proposed algorithm also speeds up the previous method given in Dou and Hon (2014). Several numerical examples are constructed to verify the accuracy and efficiency of the proposed method. (C) 2015 Elsevier Ltd. All rights reserved.
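The Tikhonov-plus-GCV step above is a standard recipe that can be sketched generically: given an ill-conditioned system A x = b, compute the SVD once, then pick the regularization parameter minimizing the GCV functional. This sketch assumes a generic dense system (here a Hilbert matrix as a stand-in for the ill-conditioned collocation matrix); it is not the paper's fundamental-solution discretization.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2, choosing lam by
    generalized cross validation (GCV), via one SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        resid = np.linalg.norm((1 - f) * beta) ** 2
        trace = len(b) - f.sum()              # trace(I - A A_lam^+)
        gcv = resid / trace**2                # GCV functional
        if best is None or gcv < best[0]:
            best = (gcv, lam)
    _, lam = best
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ ((f / s) * beta), lam

# Ill-conditioned test system: a 12 x 12 Hilbert matrix with noisy data.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(1).standard_normal(n)
x, lam = tikhonov_gcv(A, b, np.logspace(-10, 0, 50))
```

The point of GCV is that it needs no estimate of the noise level, which matches the backward-problem setting where the data perturbation is unknown.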
Kernel-based reinforcement learning (KBRL) stands out among approximate reinforcement learning algorithms for its strong theoretical guarantees. By casting the learning problem as a local kernel approximation, KBRL provides a way of computing a decision policy which converges to a unique solution and is statistically consistent. Unfortunately, the model constructed by KBRL grows with the number of sample transitions, resulting in a computational cost that precludes its application to large-scale or on-line domains. In this paper we introduce an algorithm that turns KBRL into a practical reinforcement learning tool. Kernel-based stochastic factorization (KBSF) builds on a simple idea: when a transition probability matrix is represented as the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another transition matrix, potentially much smaller than the original, which retains some fundamental properties of its precursor. KBSF exploits this insight to compress the information contained in KBRL's model into an approximator of fixed size. This makes it possible to build an approximation considering both the difficulty of the problem and the associated computational cost. KBSF's computational complexity is linear in the number of sample transitions, which is the best one can do without discarding data. Moreover, the algorithm's simple mechanics allow for a fully incremental implementation that makes the amount of memory used independent of the number of sample transitions. The result is a kernel-based reinforcement learning algorithm that can be applied to large-scale problems in both off-line and on-line regimes. We derive upper bounds for the distance between the value functions computed by KBRL and KBSF using the same data. We also prove that it is possible to control the magnitude of the variables appearing in our bounds, which means that, given enough computational resources, we can make KBSF's value function as close as desired to the value function that would be computed by KBRL using the same data.
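The factor-swapping idea at the heart of KBSF can be demonstrated in a few lines: if P = D K with D (n x m) and K (m x n) both row-stochastic, then the swapped product K D is an m x m matrix that is again row-stochastic, with m potentially much smaller than n. The random factors below are illustrative, not KBSF's actual kernel-derived matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 5  # n sample states, m representative states (m << n)

def random_stochastic(rows, cols, rng):
    """A random row-stochastic matrix (non-negative rows summing to 1)."""
    M = rng.random((rows, cols))
    return M / M.sum(axis=1, keepdims=True)

D = random_stochastic(n, m, rng)  # n x m stochastic factor
K = random_stochastic(m, n, rng)  # m x n stochastic factor

P_big = D @ K    # n x n transition matrix (what KBRL's model looks like)
P_small = K @ D  # m x m transition matrix obtained by swapping the factors

# Both products are valid transition matrices: rows sum to 1.
assert np.allclose(P_big.sum(axis=1), 1.0)
assert np.allclose(P_small.sum(axis=1), 1.0)
```

The compression is what makes the method practical: value iteration on P_small costs O(m^2) per sweep instead of O(n^2), while the shared factors tie the small model's dynamics back to the large one.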
Based on a kernel-based approximation technique, we devise in this paper an efficient and accurate numerical scheme for solving a backward problem of the time-fractional diffusion equation (BTFDE). The kernels used in the approximation are the fundamental solutions of the time-fractional diffusion equation, which can be expressed in terms of the M-Wright functions. To stably and accurately solve the resultant highly ill-conditioned system of equations, we successfully combine the standard Tikhonov regularization technique with the L-curve method to obtain an optimal choice of the regularization parameter and the location of the source points. Several 1D and 2D numerical examples are constructed to demonstrate the superior accuracy and efficiency of the proposed method for solving both the classical backward heat conduction problem (BHCP) and the BTFDE. (C) 2013 Elsevier Ltd. All rights reserved.
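The L-curve criterion used here differs from GCV: one plots the residual norm against the solution norm on log-log axes over a range of regularization parameters and picks the "corner" of the resulting L-shaped curve. A minimal sketch, again on a generic Hilbert-matrix test system rather than the paper's M-Wright collocation matrix; the maximum-curvature corner detector is a common heuristic, not the authors' exact procedure.

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Pick the Tikhonov parameter at the corner of the L-curve
    (log residual norm vs log solution norm), located here by a
    maximum-discrete-curvature heuristic."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho, eta = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                         # filter factors
        rho.append(np.log(np.linalg.norm((1 - f) * beta)))  # residual norm
        eta.append(np.log(np.linalg.norm((f / s) * beta)))  # solution norm
    rho, eta = np.array(rho), np.array(eta)
    # Discrete curvature of the parametric curve (rho(lam), eta(lam)).
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[np.argmax(np.abs(kappa))]

# Ill-conditioned test system with a small data perturbation.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
b = A @ np.ones(n) + 1e-6 * np.random.default_rng(2).standard_normal(n)
lambdas = np.logspace(-12, 0, 60)
lam = l_curve_corner(A, b, lambdas)
```

Unlike GCV, the L-curve makes the bias-variance trade-off visible geometrically: the flat branch is dominated by the data noise, the steep branch by amplified solution components, and the corner balances the two.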
This paper presents a general approach toward the optimal selection and ensemble (weighted average) of kernel-based approximations to address the issue of model selection. That is, depending on the problem under consideration and the loss function, a particular modeling scheme may outperform the others, and, in general, it is not known a priori which one should be selected. The surrogates for the ensemble are chosen based on their performance, favoring non-dominated models, while the weights are adaptive and inversely proportional to estimates of the local prediction variance of the individual surrogates. In tests on both well-known analytical functions and the surrogate-based modeling of a field-scale alkali-surfactant-polymer enhanced oil recovery process, the ensemble of surrogates generally outperformed the best individual surrogate and provided among the best predictions throughout the domains of interest.
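The adaptive weighting described above reduces, at each prediction point, to weights inversely proportional to each surrogate's local prediction-variance estimate. A minimal sketch with two hypothetical surrogates and made-up variance estimates; the predictions and variances here are illustrative stand-ins, not the paper's surrogates.

```python
import numpy as np

def ensemble_predict(preds, variances):
    """Weighted-average ensemble: at each point, the weight of each
    surrogate is inversely proportional to its local prediction
    variance estimate, normalized to sum to 1 across surrogates."""
    preds = np.asarray(preds, float)        # (n_models, n_points)
    w = 1.0 / np.asarray(variances, float)  # raw inverse-variance weights
    w = w / w.sum(axis=0, keepdims=True)    # normalize per point
    return (w * preds).sum(axis=0)

# Two hypothetical surrogates predicting at three points; each is
# trusted (low variance) in a different region of the domain.
preds = [[1.0, 2.0, 3.0],
         [3.0, 2.0, 1.0]]
variances = [[0.1, 1.0, 1.0],
             [1.0, 1.0, 0.1]]
y = ensemble_predict(preds, variances)
```

At the middle point the variances tie, so the ensemble returns the plain average (2.0); at the end points the prediction is pulled toward whichever surrogate reports the smaller local variance.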