This paper presents a regularized Newton method (RNM) with generalized regularization terms for unconstrained convex optimization problems. The generalized regularization includes quadratic, cubic, and elastic net regularizations as special cases. Therefore, the proposed method serves as a general framework that includes not only the classical and cubic RNMs but also a novel RNM with elastic net regularization. We show that the proposed RNM has the global O(k^{-2}) and local superlinear convergence, which are the same as those of the cubic RNM.
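As a rough illustration of the idea, the following sketch implements the quadratic-regularization special case, assuming the common choice mu_k = c * ||grad f(x_k)|| for the regularization parameter; the paper's generalized framework (cubic and elastic-net terms) would change only how the regularizer enters the local model.

```python
# Minimal sketch of a quadratically regularized Newton iteration; the
# parameter rule mu_k = c * ||grad f(x_k)|| is an illustrative assumption.
import numpy as np

def regularized_newton(f_grad, f_hess, x0, c=1.0, tol=1e-8, max_iter=100):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = f_hess(x)
        mu = c * np.linalg.norm(g)   # keeps H + mu*I positive definite
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
        x = x + d
    return x

# Example: a convex quartic whose Hessian is singular at the minimizer 0.
f_grad = lambda x: 4 * x**3
f_hess = lambda x: np.diag(12 * x**2)
print(regularized_newton(f_grad, f_hess, np.array([1.0, -2.0])))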
Accelerated gradient methods have the potential of achieving optimal convergence rates and have successfully been used in many practical applications. Despite this fact, the rationale underlying these accelerated methods remains elusive. In this work, we study gradient-based accelerated optimization methods obtained by directly discretizing a second-order ordinary differential equation (ODE) related to the Bregman Lagrangian. We show that for sufficiently smooth objectives, the acceleration can be achieved by discretizing the proposed ODE using s-stage q-order implicit Runge-Kutta integrators. In particular, we prove that under the assumption of convexity and sufficient smoothness, the sequence of iterates generated by the proposed accelerated method stably converges to the optimal solution at a rate of O(C_{p,q} L / N^p), where p >= 2 is the parameter in the second-order ODE and C_{p,q} is a constant depending on p and q. Several numerical experiments are given to verify the convergence results.
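A minimal sketch of the general recipe, assuming the second-order ODE takes the Bregman-Lagrangian form x'' + ((p+1)/t) x' + p^2 t^(p-2) grad f(x) = 0 and discretizing it with the implicit midpoint rule (a one-stage, order-two implicit Runge-Kutta method), with the implicit stage solved by plain fixed-point iteration; the step size, iteration count, and quadratic test objective are illustrative choices, not the paper's.

```python
import numpy as np

def implicit_midpoint_step(F, t, y, h, fp_iters=50):
    # Solve the implicit stage k = F(t + h/2, y + (h/2) k) by fixed point.
    k = F(t, y)
    for _ in range(fp_iters):
        k = F(t + h / 2, y + (h / 2) * k)
    return y + h * k

def accelerated_flow(grad_f, x0, p=2, h=0.05, t0=1.0, steps=400):
    # First-order form of  x'' + ((p+1)/t) x' + p^2 t^(p-2) grad_f(x) = 0.
    n = len(x0)
    def F(t, y):
        x, v = y[:n], y[n:]
        return np.concatenate(
            [v, -(p + 1) / t * v - p**2 * t**(p - 2) * grad_f(x)])
    y, t = np.concatenate([x0, np.zeros(n)]), t0
    for _ in range(steps):
        y = implicit_midpoint_step(F, t, y, h)
        t += h
    return y[:n]

grad_f = lambda x: x   # f(x) = ||x||^2 / 2, minimizer at 0
print(accelerated_flow(grad_f, np.array([3.0, -1.0])))
```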
In this paper, we present a regularized Newton method (M-RNM) with correction for minimizing a convex function whose Hessian matrices may be singular. At every iteration, not only an RNM step but also two correction steps are computed. We show that if the objective function is LC^2, then the method is globally convergent. Numerical results show that the new algorithm performs very well.
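A hedged sketch of the step structure described above: one regularized Newton step followed by two cheap correction steps that reuse the same Cholesky factorization. The parameter rule mu_k = c||grad f(x_k)|| and the exact form of the corrections are assumptions made for illustration, not necessarily the paper's.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rnm_with_correction(grad, hess, x0, c=1.0, tol=1e-8, max_iter=50):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mu = c * np.linalg.norm(g)                       # regularization parameter
        fac = cho_factor(hess(x) + mu * np.eye(len(x)))  # factor once per iteration
        y = x + cho_solve(fac, -g)                       # RNM step
        for _ in range(2):                               # two correction steps,
            y = y + cho_solve(fac, -grad(y))             # reusing the factorization
        x = y
    return x

# f(x) = sum(x^4): convex, with singular Hessian at the minimizer x* = 0.
grad = lambda x: 4 * x**3
hess = lambda x: np.diag(12 * x**2)
print(rnm_with_correction(grad, hess, np.array([1.0, -2.0])))
```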
We propose a new technique for the minimization of convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximated linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered "non-serious", a cutting plane is added and a new search direction is computed. This procedure is repeated until a "serious" step is obtained. When this happens, the search direction is a feasible descent direction of the equivalent constrained problem. To compute the search directions we employ the same formulation as in FDIPA, the Feasible Directions Interior Point Algorithm for constrained optimization. We prove global convergence of the present method. A set of numerical tests is described. The present technique was also successfully applied to the topology optimization of robust trusses. Our results are comparable to those obtained with other well-established methods.
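To make the cutting-plane ingredient concrete, here is a minimal Kelley-style sketch: the nonsmooth problem is rewritten as min z s.t. f(x) <= z, f is replaced by an accumulating bundle of cutting planes, and each LP is solved with scipy.optimize.linprog over an artificial box. Note this shows only the cutting-plane model; the FDIPA search directions, the serious/non-serious test, and the line search of the actual method are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, subgrad, x0, lo=-10.0, hi=10.0, iters=30):
    """Kelley's cutting-plane method on the box [lo, hi]^n."""
    n = len(x0)
    pts = [np.asarray(x0, float)]
    for _ in range(iters):
        # LP variables (x, z): minimize z subject to the accumulated cuts
        #   f(x_i) + g_i^T (x - x_i) <= z  for every stored point x_i.
        A, b = [], []
        for xi in pts:
            gi = subgrad(xi)
            A.append(np.concatenate([gi, [-1.0]]))
            b.append(gi @ xi - f(xi))
        bounds = [(lo, hi)] * n + [(None, None)]
        res = linprog(c=np.concatenate([np.zeros(n), [1.0]]),
                      A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
        pts.append(res.x[:n])
    return pts[-1]

# Example: minimize the nonsmooth f(x) = ||x - a||_1.
a = np.array([1.0, -2.0])
f = lambda x: np.abs(x - a).sum()
subgrad = lambda x: np.sign(x - a)
print(kelley(f, subgrad, np.zeros(2)))
```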
We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local private multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation-of-time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents compute and sequentially update, in a distributed fashion, an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed.
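The overall mechanism can be sketched as follows for scalar quadratic local costs on a ring network: a dynamic average-consensus step (fast time scale) tracks the network-wide average gradient and Hessian, while a small-gain Newton-Raphson update (slow time scale) moves each local estimate toward the ratio of the two. The mixing matrix, step size, and local costs below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
a, b = rng.uniform(1, 2, n), rng.uniform(-1, 1, n)  # f_i(x) = a_i (x - b_i)^2 / 2

P = np.zeros((n, n))                                 # doubly stochastic ring
for i in range(n):
    P[i, [i, (i - 1) % n, (i + 1) % n]] = 1 / 3

x = np.zeros(n)                                      # each agent's estimate
g_old, h_old = a * (x - b), a.copy()
y, z = g_old.copy(), h_old.copy()   # trackers of average gradient / Hessian
eps = 0.05                          # small gain = separation of time scales
for _ in range(3000):
    x = x + eps * (-y / z)                 # slow Newton-Raphson update
    g_new, h_new = a * (x - b), a.copy()
    y = P @ y + (g_new - g_old)            # dynamic consensus on gradients
    z = P @ z + (h_new - h_old)            # dynamic consensus on Hessians
    g_old, h_old = g_new, h_new

print(x, "target:", (a * b).sum() / a.sum())
```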
Accurate signal recovery from an underdetermined system of linear equations (USLE) is a topic of considerable interest in fields such as compressed sensing (CS), low-rank matrix recovery, blind source separation, and related areas. In order to improve the accuracy of signal recovery from a USLE in CS, we develop a new algorithm called composite trigonometric function null-space re-weighted approximate l(0)-norm (CTNRAL0). The proposed algorithm deploys a composite trigonometric function as a nonconvex penalty for sparsity, which better approximates the l(0)-norm and can yield more accurate solutions. To solve the nonconvex minimization formulation efficiently, we adopt a gradual nonconvexity method. In addition, the null space of the measurement matrix is exploited in the CTNRAL0 algorithm, which reduces the dimension of the problem. The proposed algorithm has been compared with the smoothed l(0)-norm and null-space re-weighted approximate l(0)-norm algorithms via numerical simulations to show its improved performance in noisy environments, at comparable computational cost. Furthermore, compared with interior-point LP solvers and the iteratively re-weighted least squares algorithm, the computational cost of the proposed algorithm is one to two orders of magnitude lower.
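Two ingredients of this family of methods can be sketched generically: a smooth nonconvex surrogate for the l(0)-norm whose sharpness is gradually increased (gradual nonconvexity), and a parametrization over the null space of the measurement matrix so that Ax = y holds exactly throughout. The sketch below uses the Gaussian surrogate of the smoothed-l0 family rather than the paper's composite trigonometric penalty, and all parameter values are illustrative.

```python
import numpy as np

def nullspace_smoothed_l0(A, y, sigmas=np.geomspace(1.0, 1e-3, 15),
                          inner=100, lr=0.1):
    # Particular solution + null-space basis: every x = x_p + V z solves Ax = y.
    x_p = np.linalg.pinv(A) @ y
    _, _, Vt = np.linalg.svd(A)
    V = Vt[A.shape[0]:].T            # basis of null(A); assumes full row rank
    z = np.zeros(V.shape[1])
    for sigma in sigmas:             # gradual nonconvexity: shrink sigma
        for _ in range(inner):
            x = x_p + V @ z
            # gradient of sum(1 - exp(-x^2 / (2 sigma^2))) in x
            gx = (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
            z = z - lr * sigma**2 * (V.T @ gx)   # step scaled by sigma^2
    return x_p + V @ z

# Example: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(1)
n, m = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[[3, 17, 40]] = [1.5, -2.0, 0.7]
x_hat = nullspace_smoothed_l0(A, A @ x_true)
print(np.round(x_hat[[3, 17, 40]], 2))
```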
In this paper, we propose a regularized Newton method for systems of monotone nonlinear equations. The regularization parameter is taken as the norm of the residual, and a correction step with little additional computation is also performed to compensate for the shorter trial step caused by the introduction of the regularization parameter. Under the local error bound condition, which is weaker than nonsingularity, we show that the new regularized Newton method with correction has quadratic convergence. We also apply the new method to unconstrained convex optimization problems whose Hessian may be singular at the solutions, and develop a globally convergent regularized Newton algorithm by using a trust-region technique. Numerical results show that the algorithm is very efficient and robust.
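A local sketch of the step just described, assuming mu_k = ||F(x_k)|| and a single correction step that reuses the regularized Jacobian; the trust-region globalization used in the paper is omitted, and the exact form of the correction is an assumption for illustration.

```python
import numpy as np

def regularized_newton_eqs(F, J, x0, tol=1e-10, max_iter=100):
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        mu = np.linalg.norm(Fx)                # regularization = residual norm
        if mu < tol:
            break
        M = J(x) + mu * np.eye(len(x))         # regularized Jacobian
        d = np.linalg.solve(M, -Fx)            # trial step
        d_c = np.linalg.solve(M, -F(x + d))    # correction step, same matrix
        x = x + d + d_c
    return x

# Monotone system with a singular Jacobian at the solution x* = 0.
F = lambda x: x**3
J = lambda x: np.diag(3 * x**2)
print(regularized_newton_eqs(F, J, np.array([1.0, -0.5])))
```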