In this paper, we study zeroth-order algorithms for nonconvex-concave minimax problems, which have attracted much attention in machine learning, signal processing, and many other fields in recent years. We propose a zeroth-order alternating randomized gradient projection (ZO-AGP) algorithm for smooth nonconvex-concave minimax problems; its iteration complexity to obtain an $\varepsilon$-stationary point is bounded by $O(\varepsilon^{-4})$, and the number of function value estimates per iteration is bounded by $O(d_x + d_y)$. Moreover, we propose a zeroth-order block alternating randomized proximal gradient algorithm (ZO-BAPG) for solving blockwise nonsmooth nonconvex-concave minimax optimization problems; its iteration complexity to obtain an $\varepsilon$-stationary point is bounded by $O(\varepsilon^{-4})$, and the number of function value estimates per iteration is bounded by $O(K d_x + d_y)$. To the best of our knowledge, this is the first time zeroth-order algorithms with iteration complexity guarantees have been developed for solving both general smooth and blockwise nonsmooth nonconvex-concave minimax problems. Numerical results on the data poisoning attack problem and the distributed nonconvex sparse principal component analysis problem validate the efficiency of the proposed algorithms.
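To make the zeroth-order idea concrete, below is a minimal Python sketch of one alternating projected step driven purely by function values. It is not the paper's ZO-AGP method: the two-point Gaussian-smoothing estimator, the box and ball projections, the step sizes `alpha` and `beta`, and the smoothing radius `mu` are illustrative assumptions. Averaging the estimator over on the order of $d_x$ (resp. $d_y$) random directions is what matches the $O(d_x + d_y)$ function-value queries per iteration quoted above.

```python
# Hedged sketch of a zeroth-order alternating projected step (not the paper's exact ZO-AGP).
import numpy as np

def zo_grad(f, z, mu=1e-3, n_samples=1):
    """Randomized two-point (forward-difference) estimate of grad f(z) from function values."""
    d = z.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = np.random.randn(d)                   # random Gaussian direction
        g += (f(z + mu * u) - f(z)) / mu * u     # two function evaluations per sample
    return g / n_samples

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d (an assumed feasible set)."""
    return np.clip(x, lo, hi)

def project_ball(y, radius=1.0):
    """Euclidean projection onto the ball of the given radius (an assumed feasible set)."""
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else y * (radius / nrm)

def zo_alternating_step(f, x, y, alpha=1e-2, beta=1e-2, n_samples=1):
    """One alternating iteration: projected ZO descent in x, then projected ZO ascent in y."""
    gx = zo_grad(lambda v: f(v, y), x, n_samples=n_samples)
    x_new = project_box(x - alpha * gx)
    gy = zo_grad(lambda v: f(x_new, v), y, n_samples=n_samples)
    y_new = project_ball(y + beta * gy)
    return x_new, y_new

# Purely illustrative usage on a toy coupled objective:
# f = lambda x, y: 0.5 * np.sum(x**2) + x @ y
# x, y = np.zeros(3), np.zeros(3)
# x, y = zo_alternating_step(f, x, y)
```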
Nonconvex minimax problems have attracted significant interest in machine learning and many other fields in recent years. In this paper, we propose a new zeroth-order alternating randomized gradient projection algorithm to solve smooth nonconvex-linear minimax problems; its iteration complexity to find an $\varepsilon$-first-order Nash equilibrium is $O(\varepsilon^{-3})$, and the number of function value estimates per iteration is bounded by $O(d_x \varepsilon^{-2})$. Furthermore, we propose a zeroth-order alternating randomized proximal gradient algorithm for block-wise nonsmooth nonconvex-linear minimax problems; its corresponding iteration complexity is $O(K^{3/2} \varepsilon^{-3})$, and the number of function value estimates per iteration is bounded by $O(d_x \varepsilon^{-2})$. Numerical results indicate the efficiency of the proposed algorithms.
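For the block-wise nonsmooth setting, the distinguishing ingredient is a proximal update applied block by block. The following is a minimal sketch of that block update for the min variable only, assuming the nonsmooth term is a block-separable $\ell_1$ penalty handled by soft-thresholding; the block split, the step size `alpha`, the penalty weight `lam`, and the gradient estimator passed in (for example, the two-point estimator sketched above) are placeholders, not the paper's algorithm or its parameter choices, and the update of the max variable is omitted.

```python
# Hedged sketch of a block-wise zeroth-order proximal step (not the paper's exact method).
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def zo_block_prox_step(g, blocks, grad_est, alpha=1e-2, lam=1e-3):
    """Update the K blocks of the min variable one at a time:
    a zeroth-order gradient step on the smooth part g, followed by an l1 prox."""
    new_blocks = [b.copy() for b in blocks]
    for k in range(len(new_blocks)):
        def g_k(v, k=k):
            trial = list(new_blocks)
            trial[k] = v                          # vary block k, freeze the other blocks
            return g(trial)
        gk = grad_est(g_k, new_blocks[k])         # ZO estimate of the block-k partial gradient
        new_blocks[k] = soft_threshold(new_blocks[k] - alpha * gk, alpha * lam)
    return new_blocks
```

The sequential loop over blocks is what drives the dependence of the iteration complexity on the number of blocks $K$; each block update only queries function values of the smooth part with the other blocks held fixed.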