In this work, we conduct the first systematic study of stochastic variational inequality (SVI) and stochastic saddle point (SSP) problems under the constraint of differential privacy (DP). We propose two algorithms: Noisy Stochastic Extragradient (NSEG) and Noisy Inexact Stochastic Proximal Point (NISPP). We show that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size, with respect to the strong gap function; and a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function. We also show lower bounds of the same order on the weak gap function. Hence, our algorithms are optimal. Key to our analysis is the investigation of algorithmic stability bounds for these algorithms, which are new even in the nonprivate case. The running time of the sampling-with-replacement algorithms, with respect to the dataset size n, is O(n^2) for NSEG and O(n^{3/2}) for NISPP.
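The extragradient structure underlying NSEG can be illustrated with a minimal sketch. The toy monotone operator F(w) = w, the step size, and the Gaussian noise scale below are hypothetical choices for illustration only; the paper's actual noise calibration for DP and its sampling schemes are more involved.

```python
import numpy as np

def nseg_step(w, oracle, eta, sigma, rng):
    """One noisy stochastic extragradient step (illustrative sketch)."""
    # Extrapolation (leading) step with added Gaussian noise.
    g1 = oracle(w) + sigma * rng.standard_normal(w.shape)
    w_half = w - eta * g1
    # Update step, evaluated at the extrapolated point.
    g2 = oracle(w_half) + sigma * rng.standard_normal(w_half.shape)
    return w - eta * g2

# Toy strongly monotone operator F(w) = w, whose solution is w = 0.
rng = np.random.default_rng(0)
w = np.ones(3)
for _ in range(200):
    w = nseg_step(w, lambda v: v, eta=0.1, sigma=0.01, rng=rng)
```

The extrapolated evaluation point is what distinguishes extragradient from plain gradient descent on monotone problems.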
This paper studies the stochastic behavior of the LMS and NLMS algorithms for a system identification framework when the input signal is a cyclostationary white Gaussian process. The input cyclostationary signal is modeled by a white Gaussian random process with periodically time-varying power. Mathematical models are derived for the mean and mean-square-deviation (MSD) behavior of the adaptive weights with the input cyclostationarity. These models are also applied to the non-stationary system with a random walk variation of the optimal weights. Monte Carlo simulations of the two algorithms provide strong support for the theory. Finally, the performance of the two algorithms is compared for a variety of scenarios.
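The two weight updates under a periodically time-varying input power can be sketched as follows. The sinusoidal power profile, step size, filter length, and noise level are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, mu, eps = 2000, 8, 0.05, 1e-8
w_opt = rng.standard_normal(M)      # unknown system to identify
w_lms = np.zeros(M)
w_nlms = np.zeros(M)
for n in range(N):
    # Cyclostationary white Gaussian input: periodically time-varying power.
    power = 1.0 + 0.5 * np.sin(2 * np.pi * n / 100)
    x = np.sqrt(power) * rng.standard_normal(M)
    d = w_opt @ x + 0.01 * rng.standard_normal()       # desired signal
    e_lms = d - w_lms @ x
    w_lms = w_lms + mu * e_lms * x                     # LMS update
    e_nlms = d - w_nlms @ x
    w_nlms = w_nlms + mu * e_nlms * x / (eps + x @ x)  # NLMS update

msd_lms = float(np.sum((w_opt - w_lms) ** 2))   # mean-square deviation
msd_nlms = float(np.sum((w_opt - w_nlms) ** 2))
```

The normalization in NLMS makes its convergence much less sensitive to the periodic power variation than plain LMS, which is one of the comparisons the paper's models quantify.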
In the one-dimensional bin packing problem, a list of n items has to be packed into a minimum number of unit-capacity bins. A class of linear online algorithms for the approximate solution of bin packing with items drawn from a known probability distribution is presented. Each algorithm depends on the distribution and on a parameter controlling the performance of the algorithm. It is shown that, as the number of items increases, the expected performance ratio deviates from the optimum by an arbitrarily small amount.
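As a point of reference for the linear online class discussed above, here is Next Fit, a classical linear-time online algorithm; unlike the distribution-dependent algorithms of the paper, it ignores the item-size distribution entirely.

```python
def next_fit(items):
    """Next Fit: pack each item online into the current open bin if it fits,
    otherwise open a new bin. Runs in O(n) time for n items."""
    bins = []
    space = 0.0          # remaining capacity of the current open bin
    for s in items:
        if not bins or s > space:
            bins.append([s])       # open a new unit-capacity bin
            space = 1.0 - s
        else:
            bins[-1].append(s)     # item fits in the current bin
            space -= s
    return bins

packed = next_fit([0.5, 0.6, 0.3, 0.2, 0.8])   # -> 3 bins
```

Next Fit never reopens a closed bin, which is what keeps it linear; the paper's algorithms achieve near-optimal expected ratios by additionally exploiting the known distribution.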
This paper studies the stochastic behavior of the LMS algorithm in a system identification framework for a cyclostationary colored input without assuming a Gaussian distribution for the input. The input cyclostationary signal is modeled by a colored random process with periodically time-varying power. The generation of the colored non-Gaussian random process is parametrized in a novel manner by passing a Gaussian random process through a coloring filter followed by a zero-memory nonlinearity. The unknown system parameters are fixed in most of the cases studied here. Mathematical models are derived for the behavior of the mean, the mean-square deviation (MSD), and the excess mean-square error (EMSE) of the adaptive weights as a function of the input cyclostationarity. The models display the dependence of the algorithm upon the input nonlinearity and coloration. Three nonlinearities are studied in detail; Monte Carlo simulations provide strong support for the theory. (C) 2019 Published by Elsevier Inc.
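The three-stage input generation described above can be sketched as follows; the AR(1) coloring filter, the sinusoidal power profile, and the tanh nonlinearity are hypothetical choices standing in for the paper's parametrized family.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
# 1) White Gaussian process with periodically time-varying power.
n = np.arange(N)
power = 1.0 + 0.5 * np.sin(2 * np.pi * n / 250)
g = np.sqrt(power) * rng.standard_normal(N)
# 2) Coloring filter: a first-order AR filter (one possible choice).
a = 0.8
x = np.empty(N)
x[0] = g[0]
for k in range(1, N):
    x[k] = a * x[k - 1] + np.sqrt(1.0 - a * a) * g[k]
# 3) Zero-memory nonlinearity applied sample by sample (tanh here).
u = np.tanh(x)   # colored, non-Gaussian, cyclostationary input
```

Because the nonlinearity is memoryless, the coloration is imposed before it and the non-Gaussianity after, which is what makes the parametrization convenient for analysis.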
This paper studies the stochastic behavior of the least mean fourth (LMF) algorithm for a system identification framework when the input signal is a non-stationary white Gaussian process. The unknown system is modeled by the standard random-walk model. A theory is developed which is based upon the instantaneous average power and the instantaneous average squared power in the adaptive filter taps. A recursion is derived for the instantaneous mean square deviation of the LMF algorithm. This recursion yields interesting results about the transient and steady-state behaviors of the algorithm with time-varying input power. The theory is supported by Monte Carlo simulations for sinusoidal input power variations.
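The LMF recursion itself is a small change to LMS: the update is driven by the cubed error. A minimal sketch with sinusoidally time-varying input power follows; the step size, scales, and the omission of the random-walk plant variation are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, mu = 4, 10000, 5e-4
w_opt = 0.5 * rng.standard_normal(M)   # unknown system (random-walk variation omitted)
w = np.zeros(M)
for n in range(N):
    # Non-stationary white Gaussian input: sinusoidal power variation.
    power = 1.0 + 0.5 * np.sin(2 * np.pi * n / 500)
    x = np.sqrt(power) * rng.standard_normal(M)
    d = w_opt @ x + 0.01 * rng.standard_normal()
    e = d - w @ x
    w = w + mu * (e ** 3) * x          # LMF: cubed-error update
msd = float(np.sum((w_opt - w) ** 2))  # instantaneous mean-square deviation
```

The cubed error makes the effective step large when the deviation is large and small near convergence, which is why the paper tracks both the instantaneous average power and squared power in the taps.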
This paper studies the stochastic behavior of the recursive least squares (RLS) algorithm in a system identification framework for a cyclostationary colored input. The input cyclostationary signal is modeled by a colored random process with periodically time-varying power. The system parameters vary according to a random-walk. Mathematical models are derived for the mean and mean-square-deviation (MSD) behavior of the adaptive weights as a function of the input cyclostationarity. The MSD behaviors of the RLS and LMS algorithms are compared for cyclostationary colored input. Monte Carlo simulations provide strong support for the theory. A separate analysis for white Gaussian and non-Gaussian inputs is presented in support of the assumptions made for the mathematical model above.
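The standard exponentially weighted RLS recursions under a periodically time-varying input power can be sketched as follows; the power profile, forgetting factor, and fixed plant (the paper's plant follows a random walk) are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, lam = 4, 1000, 0.99          # lam: exponential forgetting factor
w_opt = rng.standard_normal(M)
w = np.zeros(M)
P = 1e3 * np.eye(M)                # inverse input-correlation estimate
for n in range(N):
    power = 1.0 + 0.5 * np.sin(2 * np.pi * n / 200)
    x = np.sqrt(power) * rng.standard_normal(M)   # cyclostationary input
    d = w_opt @ x + 0.01 * rng.standard_normal()
    # Standard RLS recursions.
    Px = P @ x
    k = Px / (lam + x @ Px)        # gain vector
    e = d - w @ x                  # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
msd = float(np.sum((w_opt - w) ** 2))
```

Because RLS normalizes by the estimated input correlation, its MSD is far less sensitive to the periodic power variation than LMS, which is the comparison the paper develops.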
In this paper, we study a stochastic subgradient algorithm for finite-sum optimization problems where the functions are not necessarily convex or smooth, and we use a weak subgradient of only one function at each iteration. We then analyze the convergence properties of the stochastic weak subgradient algorithm (SWSA). In addition, we focus on the semi-supervised machine learning (SSML) problem, in which the functions are neither smooth nor convex, and we propose an algorithm called WS-SSML to compute a weak subgradient of these functions in SSML. Finally, we solve the SSML problem using data sets from the literature and compare our results. We conclude that SWSA with WS-SSML works well in practice.
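The one-function-per-iteration structure can be sketched on a convex toy problem (where a weak subgradient reduces to an ordinary subgradient); the diminishing step-size rule and the toy objective are hypothetical choices, not the paper's SSML setup.

```python
import numpy as np

def swsa(subgrads, x0, steps, rng):
    """Sketch of a stochastic subgradient loop for min_x sum_i f_i(x):
    each iteration uses a subgradient of ONE randomly chosen f_i."""
    x = x0.copy()
    m = len(subgrads)
    for t in range(1, steps + 1):
        i = rng.integers(m)                 # sample one component function
        g = subgrads[i](x)                  # its subgradient at x
        x = x - (1.0 / np.sqrt(t)) * g      # diminishing step size
    return x

# Toy finite sum: f_i(x) = |x - c_i|, subgradient sign(x - c_i).
# The minimizer of the sum is the median of the c_i, here 0.
centers = [-1.0, 0.0, 1.0]
subgrads = [lambda x, c=c: np.sign(x - c) for c in centers]
rng = np.random.default_rng(5)
x_hat = swsa(subgrads, np.array([5.0]), 4000, rng)
```

Only one component's subgradient is touched per iteration, which is the cost structure that makes the method attractive for large finite sums.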
The effects of saturation-type nonlinearities on the input and the error in the weight-update equation for LMS adaptation are investigated for a stationary white Gaussian data model for system identification. Nonlinear recursions are derived for the transient and steady-state weight first and second moments that include the effect of soft limiters on both the input and the error driving the algorithm. By varying a single parameter of the soft limiter, a general theory is presented that is applicable to LMS, to soft limiting of the input, the error, or both, and to sign-sign LMS. (C) 2017 Elsevier B.V. All rights reserved.
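The family of updates can be sketched with a hard clip standing in for the paper's parametrized soft limiter (an assumption of this sketch): a large saturation level recovers plain LMS, while a small level approaches sign-sign LMS after rescaling.

```python
import numpy as np

def soft_limit(u, s):
    # Saturation nonlinearity: linear for |u| < s, clipped at +/- s.
    # (A hard clip is used here; the paper's soft limiter differs.)
    return np.clip(u, -s, s)

rng = np.random.default_rng(6)
M, N, mu, s = 4, 8000, 0.02, 0.5
w_opt = rng.standard_normal(M)
w = np.zeros(M)
for n in range(N):
    x = rng.standard_normal(M)                 # stationary white Gaussian input
    d = w_opt @ x + 0.01 * rng.standard_normal()
    e = d - w @ x
    # Limiter applied to both the error and the input driving the update.
    w = w + mu * soft_limit(e, s) * soft_limit(x, s)
msd = float(np.sum((w_opt - w) ** 2))
```

Varying `s` sweeps continuously between the LMS and sign-sign regimes, which is the single-parameter family the abstract refers to.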
Sparse signal recovery arises from many applications. However, deterministic algorithms often require significant time, especially for large-scale systems. Hence, stochastic algorithms like the stochastic Iterative Hard Thresholding Algorithm (StoIHT) were proposed to address large-scale problems. In this letter, we propose using the stochastic Polyak step size method to design step sizes and provide theoretical convergence analysis. Experimental results suggest that our algorithm demonstrates comparable performance to other stochastic algorithms in sparse signal recovery and image reconstruction with faster convergence.
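A minimal sketch of the idea follows: a StoIHT-style loop on random row blocks, with a Polyak-type step size computed from the sampled block (here with optimal value taken as zero for a consistent, noiseless system). Problem sizes, block size, and iteration count are hypothetical.

```python
import numpy as np

def hard_threshold(v, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(7)
m, d, k, b = 80, 200, 5, 10
A = rng.standard_normal((m, d)) / np.sqrt(m)
x_true = np.zeros(d)
x_true[:k] = rng.standard_normal(k)
y = A @ x_true                       # noiseless sparse measurements
x = np.zeros(d)
for t in range(4000):
    idx = rng.choice(m, size=b, replace=False)   # sample a row block
    Ai = A[idx]
    r = Ai @ x - y[idx]
    g = Ai.T @ r                     # stochastic gradient of 0.5*||r||^2
    # Polyak-type step: (f_i(x) - f_i^*) / ||g||^2 with f_i^* = 0.
    gamma = (0.5 * (r @ r)) / (g @ g + 1e-12)
    x = hard_threshold(x - gamma * g, k)
```

The Polyak rule adapts the step to the current block residual, removing the hand-tuned step size of plain StoIHT.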
Functional constrained optimization is becoming increasingly important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semisupervised learning and robust optimization among others. In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this method is a unified algorithm that achieves the best-known rate of convergence for solving different functional constrained convex composite problems, including convex or strongly convex, and smooth or nonsmooth problems with stochastic objective and/or stochastic constraints. Many of these rates of convergence were in fact obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. Contrary to existing primal-dual methods, it does not require the projection of Lagrangian multipliers into a (possibly unknown) bounded set. Second, for nonconvex functional constrained problems, we introduce a new proximal point method which transforms the initial nonconvex problem into a sequence of convex problems by adding quadratic terms to both the objective and constraints. Under a certain MFCQ-type assumption, we establish the convergence and rate of convergence of this method to KKT points when the convex subproblems are solved exactly or inexactly. For large-scale and stochastic problems, we present a more practical proximal point method in which the approximate solutions of the subproblems are computed by the aforementioned ConEx method. Under a strong feasibility assumption, we establish the total iteration complexity of ConEx required by this inexact proximal point method for a variety of problem settings, including nonconvex smooth or nonsmooth problems with stochastic objective and/or stochastic constraints.
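The constraint-extrapolation idea can be sketched on a deterministic toy problem with a single constraint; the step sizes, the extrapolation weight, and the toy problem are hypothetical, and the paper's method covers composite and stochastic settings well beyond this sketch.

```python
# Toy: minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Solution: x* = 1, with Lagrange multiplier y* = 2.
f_grad = lambda x: 2.0 * x
g = lambda x: 1.0 - x
g_grad = lambda x: -1.0

x, y = 0.0, 0.0                     # primal iterate, Lagrange multiplier
g_prev = g(x)
eta, tau, theta = 0.05, 0.05, 1.0   # primal/dual steps, extrapolation weight
for t in range(4000):
    # Constraint extrapolation: extrapolate the constraint value, not the dual.
    s = g(x) + theta * (g(x) - g_prev)
    g_prev = g(x)
    # Dual ascent; the only projection is onto the nonnegative orthant,
    # with no bounded multiplier set required.
    y = max(0.0, y + tau * s)
    # Primal gradient step on the Lagrangian.
    x = x - eta * (f_grad(x) + y * g_grad(x))
```

The single loop with no penalty subproblem and no bounded multiplier set mirrors the two structural properties of ConEx highlighted above.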