Transfer learning aims to improve the estimation and prediction accuracy of a target model by transferring information from source data when data in the target domain are limited. However, existing transfer-learning approaches often overlook the heterogeneity and heavy-tailed nature of high-dimensional data. We therefore consider Huber regression, which is robust to heavy-tailed data and outliers. In this article, building on residual-importance-based transfer learning, we add constraints encoding prior information to study robust transfer learning, estimate the coefficients of the target model by Huber regression, and demonstrate the effectiveness of the method through simulation experiments and a real-data case.
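As a minimal illustration of the robust estimator underlying this work, the sketch below fits Huber regression by plain gradient descent; the transfer step and the prior-information constraints described in the abstract are not reproduced, and the threshold `delta`, the solver, and the toy data are all assumptions.

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss with respect to the residual vector r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def huber_regression(X, y, delta=1.35, lr=0.01, iters=2000):
    """Estimate coefficients by gradient descent on the Huber loss."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        r = X @ beta - y
        beta -= lr * X.T @ huber_grad(r, delta) / len(y)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(df=2, size=200)   # heavy-tailed noise
print(huber_regression(X, y))                        # roughly beta_true
```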
In Ref. 1, Bazaraa and Goode provided an algorithm for solving a nonlinear programming problem with linear constraints. In this paper, we show that this algorithm possesses good convergence properties.
Linear constraints occur naturally in many reasoning problems, and the information they represent is often uncertain. There is a difficulty in applying AI uncertainty formalisms to this situation, as their representation of the underlying logic, either as a mutually exclusive and exhaustive set of possibilities or with a propositional or predicate logic, is inappropriate (or at least unhelpful). To overcome this difficulty, we express reasoning with linear constraints as a logic, and develop the formalisms on this different underlying logic. We focus in particular on a possibilistic logic representation of uncertain linear constraints, a lattice-valued possibilistic logic, an assumption-based reasoning formalism and a Dempster-Shafer representation, proving some fundamental results for these extended systems. Our results on extending uncertainty formalisms also apply to a very general class of underlying monotonic logics.
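One concrete reading of the base (certain) inference step in such a logic: a consistent set of linear constraints Ax <= b entails a further constraint c.x <= d exactly when the maximum of c.x over the set does not exceed d. The sketch below checks only this crisp entailment; the possibilistic and Dempster-Shafer layers of the paper attach degrees on top of such checks, and the example constraints are made up.

```python
import numpy as np
from scipy.optimize import linprog

def entails(A, b, c, d):
    """Does {x : Ax <= b} entail c.x <= d?  (Assumes the set is nonempty.)"""
    res = linprog(-np.asarray(c), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * len(c))
    return res.status == 0 and -res.fun <= d  # bounded, and max c.x <= d

A = [[1, 0], [0, 1]]              # x <= 2, y <= 3
b = [2, 3]
print(entails(A, b, [1, 1], 5))   # x + y <= 5 follows: True
print(entails(A, b, [1, 1], 4))   # x + y <= 4 does not: False
```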
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for solving such problems. It is shown here that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution in finite time under the condition that the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.
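A hedged sketch of a standard projection-type network of this family (not necessarily the paper's exact model): the dynamics dx/dt = -x + P(x - alpha * grad f(x)), with P the projection onto {x : Ax = b}, integrated by forward Euler. The step sizes and the toy problem are assumptions.

```python
import numpy as np

def affine_projection(A, b):
    """Return the projector onto {x : Ax = b}."""
    pinv = A.T @ np.linalg.inv(A @ A.T)
    return lambda z: z - pinv @ (A @ z - b)

def projection_network(grad_f, A, b, x0, alpha=0.1, dt=0.05, steps=5000):
    P = affine_projection(A, b)
    x = P(x0)                                          # feasible start
    for _ in range(steps):
        x = x + dt * (-x + P(x - alpha * grad_f(x)))   # Euler step
    return x

# Example: min ||x - t||^2 with t = (1, 2, 3), subject to x1 + x2 + x3 = 3.
t = np.array([1.0, 2.0, 3.0])
A, b = np.array([[1.0, 1.0, 1.0]]), np.array([3.0])
x = projection_network(lambda x: 2 * (x - t), A, b, np.zeros(3))
print(x)   # approx (0, 1, 2)
```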
A matrix iteration algorithm was proposed by Peng (2012) for solving the unconstrained matrix inequality AXB >= C, and it can be extended to solve the matrix inequality under various linear constraints. However, this algorithm requires a great deal of computation, because a least-squares subproblem, with or without constraints, must be solved accurately at each iteration. In this paper, we present a modified iteration algorithm, which is a generalization of Peng's algorithm. In this iteration process, the matrix-form LSQR method is used to determine an approximate solution of each least-squares subproblem with less computational effort. The convergence of this algorithm is analyzed and several numerical examples are presented to illustrate its efficiency.
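To make the per-iteration least-squares subproblem concrete: min_X ||A X B - Z||_F vectorizes to (B^T kron A) vec(X) ~ vec(Z). The point of the matrix-form LSQR in the paper is to solve this approximately without ever forming the Kronecker product; the explicit kron below is only an illustration on toy sizes, with a made-up target Z.

```python
import numpy as np

def ls_subproblem(A, B, Z):
    """Solve min_X ||A X B - Z||_F via the vectorized formulation."""
    K = np.kron(B.T, A)            # vec(A X B) = (B^T kron A) vec(X)
    x, *_ = np.linalg.lstsq(K, Z.flatten(order="F"), rcond=None)
    return x.reshape(A.shape[1], B.shape[0], order="F")

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(2, 5))
Z = rng.normal(size=(4, 5))
X = ls_subproblem(A, B, Z)
print(np.linalg.norm(A @ X @ B - Z))   # residual of the fitted subproblem
```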
The efficiency of the classic alternating direction method of multipliers has been exhibited in various applications to large-scale separable optimization problems, with both convex and nonconvex objective functions. While there is a large body of convergence analysis for the convex case, the nonconvex case is still an open problem and research on it is in its infancy. In this paper, we give a partial answer to this problem. Specifically, under the assumption that the associated function satisfies the Kurdyka-Łojasiewicz inequality, we prove that the iterative sequence generated by the alternating direction method converges to a critical point of the problem, provided that the penalty parameter is greater than 2L, where L is the Lipschitz constant of the gradient of one of the involved functions. Under some further conditions on the problem data, we also analyse the convergence rate of the algorithm.
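A sketch of the two-block scheme analyzed here, on the model min f(x) + g(z) s.t. x - z = 0, with a possibly indefinite (hence nonconvex) quadratic f(x) = 0.5 x^T Q x - q^T x and g the indicator of the box [-1, 1]^n, so the z-step is a clip. Following the quoted condition, the penalty beta is taken greater than 2L, with L = ||Q||_2 the Lipschitz constant of grad f. The toy data are made up.

```python
import numpy as np

def admm_box(Q, q, beta, iters=300):
    n = len(q)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-step: minimize f(x) + (beta/2)||x - z + u||^2 (a linear solve;
        # Q + beta*I is positive definite once beta > -lambda_min(Q))
        x = np.linalg.solve(Q + beta * np.eye(n), q + beta * (z - u))
        z = np.clip(x + u, -1.0, 1.0)   # z-step: prox of the box indicator
        u = u + x - z                   # multiplier update
    return z

Q = np.diag([2.0, -0.5])                # indefinite -> f is nonconvex
q = np.array([1.0, 1.0])
L = np.linalg.norm(Q, 2)                # Lipschitz constant of grad f
print(admm_box(Q, q, beta=2.5 * L))     # expect approx [0.5, 1.0]
```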
An implementation of the reduced gradient algorithm is proposed to solve the linear L1 estimation problem (least absolute deviations regression) with linear equality or inequality constraints, including rank-deficient and degenerate cases. Degenerate points are treated by solving a derived L1 problem to give a descent direction. The algorithm is a direct-descent, active-set method that is shown to be finite. It is geometrically motivated and simpler than the projected gradient algorithm (PGA) of Bartels, Conn and Sinclair, which uses a penalty-function approach for the constrained case. Computational experiments indicate that the proposed algorithm compares favourably, both in reliability and efficiency, with the PGA, with the algorithms ACM551 and AFK (which use an LP formulation of the L1 problem), and with LPASL1 (which is based on the Huber approximation method of Madsen, Nielsen and Pinar). Although it is not as efficient as ACM552 (the Barrodale-Roberts algorithm) on large-scale unconstrained problems, it performs better on large-scale problems with bounded-variable constraints.
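For reference, the LP formulation of the L1 problem used by ACM551/AFK (not the reduced gradient algorithm itself) is easy to state: split the residual y - Xb into u - v with u, v >= 0 and minimize 1^T(u + v); further linear constraints on b would enter as extra rows. A minimal sketch on made-up data:

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    m, n = X.shape
    cost = np.concatenate([np.zeros(n), np.ones(2 * m)])   # min sum(u + v)
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])           # X b + u - v = y
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)    # b free; u, v >= 0
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 3.0]) + rng.laplace(size=50)        # L1-friendly noise
print(lad_regression(X, y))                                # close to (1, 3)
```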
An efficient supervisor synthesis method is proposed based on constraint transformation, since enforcing a logic expression of linear constraints on Petri nets essentially involves finding admissible constraints. It consists of cooperative algorithms that are used to obtain the logic expression of constraints for any set of forbidden markings, to remove redundant constraints using the quantity information of the constraints, to judge the existence of a permissive supervisor, to equivalently reduce multiple constraints to a single one using net structural properties, and to approach the admissible constraints from the original ones for ordinary Petri nets. These algorithms are derived from a series of theorems and lemmas presented to construct the theory of constraint transformation and reduction. In addition, a linear constraint whose uncontrollable subnet is a forward-concurrent free net can be equivalently transformed into an admissible linear constraint using this method.
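For context, once an admissible constraint l.mu <= b has been obtained, the standard invariant-based construction realizes it as a monitor place with incidence row -l^T D and initial marking b - l.mu0 (D the incidence matrix, mu0 the initial marking). The sketch below shows only this final step on a made-up two-place net; the constraint transformation that is the paper's actual subject is not reproduced.

```python
import numpy as np

def monitor_place(D, mu0, l, b):
    """Control-place incidence row and initial marking enforcing l.mu <= b."""
    d_c = -np.asarray(l) @ D          # arcs of the monitor place
    m_c0 = b - np.asarray(l) @ mu0    # its initial marking (must be >= 0)
    return d_c, m_c0

# Two places, two transitions: t1 moves a token p1 -> p2, t2 moves it back.
D = np.array([[-1,  1],
              [ 1, -1]])
mu0 = np.array([3, 0])
d_c, m_c0 = monitor_place(D, mu0, l=[0, 1], b=2)   # enforce mu(p2) <= 2
print(d_c, m_c0)   # [-1 1] 2: monitor loses a token on t1, regains it on t2
```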
In practice, especially in engineering, we often encounter linear absolute-value objective functions under linear constraints, such as ∑|x_i − c_i|. Using the simplex algorithm, we derive a variable-transformation method to solve such problems.
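The variable transformation is presumably the classic positive/negative-part split; assuming constraints of the form Ax ≤ b, the reformulation reads:

```latex
\min_{x}\ \sum_i |x_i - c_i| \ \text{ s.t. }\ Ax \le b
\;\Longleftrightarrow\;
\min_{u,v \ge 0}\ \sum_i (u_i + v_i) \ \text{ s.t. }\ A(c + u - v) \le b,
\qquad x = c + u - v .
```

The equivalence is exact: at any optimum of the right-hand LP, min(u_i, v_i) = 0, since subtracting it from both variables preserves u − v and feasibility while lowering the objective; hence u_i + v_i = |x_i − c_i|, and x = c + u − v recovers the solution.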
In this paper, the interval-valued optimization problem is converted to a general parametric problem whose solutions are efficient solutions of the original problem. We present a one-layer recurrent neural network for solving this interval-valued optimization problem with linear constraints. Based on this approach, we prove that the recurrent neural network is stable in the sense of Lyapunov and that its equilibrium point is globally convergent to the optimal solution. The proposed approach improves on existing algorithms for interval-valued optimization, and the model is easy to implement. Finally, two numerical examples are provided to show the feasibility and effectiveness of the proposed approach.
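A sketch of the kind of parametric conversion referred to above: scalarize the interval objective [fL(x), fU(x)] as w·fL + (1−w)·fU for w in (0, 1); each scalarized minimizer is then an efficient solution of the interval problem. A generic constrained solver stands in for the recurrent network here, and the toy interval problem is made up.

```python
import numpy as np
from scipy.optimize import minimize

# Interval quadratic costs: the coefficient of x_i^2 lies in [lo_i, hi_i].
lo, hi = np.array([1.0, 2.0]), np.array([3.0, 4.0])
fL = lambda x: lo @ x**2          # lower-endpoint objective
fU = lambda x: hi @ x**2          # upper-endpoint objective

def efficient_solution(w):
    """Minimize the w-scalarized objective under a linear constraint."""
    f = lambda x: w * fL(x) + (1 - w) * fU(x)
    cons = {"type": "eq", "fun": lambda x: x.sum() - 1}   # x1 + x2 = 1
    return minimize(f, x0=np.array([0.5, 0.5]), constraints=cons).x

for w in (0.1, 0.5, 0.9):
    print(w, efficient_solution(w))
```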