This paper introduces LSEMINK, an effective modified Newton-Krylov algorithm geared toward minimizing the log-sum-exp function for a linear model. Problems of this kind arise commonly, for example, in geometric programming and multinomial logistic regression. Although the log-sum-exp function is smooth and convex, standard line-search Newton-type methods can become inefficient because the quadratic approximation of the objective function can be unbounded from below. To circumvent this, LSEMINK modifies the Hessian by adding a shift in the row space of the linear model. We show that the shift renders the quadratic approximation bounded from below and that the overall scheme converges to a global minimizer under mild assumptions. Our convergence proof also shows that all iterates lie in the row space of the linear model, which can be attractive when the model parameters do not have an intuitive meaning, as is common in machine learning. Since LSEMINK uses a Krylov subspace method to compute the search direction, it only requires matrix-vector products with the linear model, which is critical for large-scale problems. Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and increases scalability compared to geometric programming and natural gradient descent approaches. It exhibits significantly faster initial convergence than standard Newton-Krylov methods, which is particularly attractive in applications like machine learning. In addition, LSEMINK is more robust to ill-conditioning arising from the nonsmoothness of the problem. We share our MATLAB implementation at a GitHub repository (https://***/KelvinKan/LSEMINK).
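The core idea described in the abstract can be sketched in a few lines: for a log-sum-exp objective with a linear model, shift the Hessian by a multiple of AᵀA (a shift in the row space of A) and solve the Newton system with a Krylov method that needs only matrix-vector products. This is a minimal illustrative sketch in Python/NumPy, not the paper's MATLAB implementation; the cross-entropy-style objective, the shift parameter `mu`, and the fixed iteration count are assumptions for the demo.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def logsumexp(z):
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def shifted_newton_krylov_step(A, c, x, mu=1.0):
    """One modified Newton-Krylov step for f(x) = logsumexp(A x) - c^T A x,
    a cross-entropy-style objective bounded below by 0 when c is a
    probability vector.  The Hessian A^T (diag(p) - p p^T) A is shifted by
    mu * A^T A -- a shift in the row space of A -- so the quadratic model
    is bounded below.  Only matvecs with A and A^T are needed."""
    p = softmax(A @ x)
    g = A.T @ (p - c)                      # gradient
    def hess_mv(v):
        Av = A @ v
        w = p * Av - p * (p @ Av)          # (diag(p) - p p^T) A v
        return A.T @ w + mu * (A.T @ Av)   # shifted Hessian-vector product
    H = LinearOperator((x.size, x.size), matvec=hess_mv)
    d, _ = cg(H, -g, maxiter=100)          # Krylov solve of the Newton system
    return x + d

# Tiny demo: the objective decreases from its value at x = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
c = np.zeros(5); c[0] = 1.0                # one-hot "label"
x = np.zeros(3)                            # start in the row space of A
f0 = logsumexp(A @ x) - c @ (A @ x)
for _ in range(30):
    x = shifted_newton_krylov_step(A, c, x)
f1 = logsumexp(A @ x) - c @ (A @ x)        # f1 < f0, and f1 >= 0 always
```

Because the Hessian-vector product and the gradient both map through Aᵀ, every CG iterate (and hence every outer iterate started at zero) stays in the row space of A, matching the abstract's claim.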
This paper presents a pattern synthesis algorithm that achieves minimum mainlobe width via sparse optimization. The problem of synthesizing a pattern with minimum mainlobe width is formulated as a sparse optimization problem with the l0 norm by introducing a slack variable. To solve the sparse optimization problem, three existing relaxations of the l0 norm are reviewed. Moreover, a novel log-sum-exp penalty function is proposed to replace the l0 norm, leading to a convex problem that can be solved directly, without solving a sequence of sparse optimization problems as required by iterative reweighted l1-norm methods. Both focused and shaped beam patterns for arbitrary arrays can be synthesized with the proposed algorithm, and the mainlobe width of the synthesized pattern is minimal. An additional advantage is that the mainlobe region no longer needs to be determined accurately, with no degradation in synthesis performance. Numerical examples are presented to verify the effectiveness and superiority of the proposed algorithm. (c) 2022 Elsevier Inc. All rights reserved.
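Both abstracts exploit a standard property of log-sum-exp: scaled appropriately, it is a smooth, convex upper bound on the max function, which is what makes penalties built from it tractable. This is a generic illustration of that background fact, not the paper's specific l0-replacing penalty; the function name `smooth_max` and the sharpness parameter `t` are assumptions for the demo.

```python
import numpy as np

def smooth_max(z, t):
    """(1/t) * logsumexp(t * z) is smooth and convex in z, and squeezes
    the max:  max(z) <= smooth_max(z, t) <= max(z) + log(n) / t,
    so larger t gives a tighter (but less smooth) approximation."""
    tz = t * np.asarray(z, dtype=float)
    m = tz.max()                     # shift for numerical stability
    return (m + np.log(np.exp(tz - m).sum())) / t

# As t grows, the smooth approximation approaches max(z) = 1.5.
z = [0.2, 1.5, -0.7, 1.4]
approximations = [smooth_max(z, t) for t in (1.0, 10.0, 100.0)]
```

The log(n)/t gap explains the trade-off such penalties face: tightness versus conditioning of the resulting convex problem.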