We consider the minimax setup for the two-armed bandit problem as applied to data processing when there are two alternative processing methods with different, a priori unknown efficiencies. One should determine the more efficient method and ensure its predominant application. To this end, we use the mirror descent algorithm (MDA). It is well known that the corresponding minimax risk has the order of N^(-1/2), where N is the amount of processed data, and this bound is order-sharp. We propose a batch version of the MDA which allows data to be processed in batches; this is especially important if parallel data processing can be provided. In this case, the processing time is determined by the number of batches rather than by the total amount of data. Unexpectedly, it turns out that the batch version behaves unlike the ordinary one even if the number of batches is large. Moreover, the batch version provides a considerably lower minimax risk, i.e., it substantially improves the control performance. We explain this result by considering another batch modification of the MDA whose behavior, and whose minimax risk, are close to those of the ordinary version. Our estimates use invariant descriptions of the algorithms based on Gaussian approximations of the income in the batches of data in the domain of "close" distributions and are obtained by Monte-Carlo simulation.
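As a rough illustration of the batch setup described above, the following Python sketch runs an exponential-weights (entropy mirror map) strategy that updates the arm-choice distribution only between batches and estimates the regret by Monte-Carlo simulation. The step size, exploration floor, and all function names are illustrative assumptions, not the authors' exact MDA.

```python
import numpy as np

def batch_mda_two_armed(p1, p2, N, M, gamma=1.0, eps=0.05, rng=None):
    """Minimal sketch of a batch mirror-descent (exponential-weights) strategy
    for a two-armed Bernoulli bandit: the N observations are split into M
    batches, and the arm-choice distribution is updated only between batches
    from importance-weighted income estimates.  The step size gamma, the
    exploration floor eps, and all names are illustrative, not the authors'
    exact algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.array([p1, p2])                  # unknown success probabilities
    cum = np.zeros(2)                       # importance-weighted income estimates
    batch = N // M
    total_income = 0.0
    for _ in range(M):
        z = gamma * cum / N
        w = np.exp(z - z.max())
        w = (1 - eps) * w / w.sum() + eps / 2   # exploration keeps weights away from 0
        arm = rng.choice(2, p=w)                # one arm is used for the whole batch
        income = rng.binomial(batch, p[arm])
        total_income += income
        cum[arm] += income / w[arm]             # unbiased estimate of the batch income
    return N * max(p1, p2) - total_income       # regret of this run

# crude Monte-Carlo estimate of the expected regret for a "close" pair of distributions
print(np.mean([batch_mda_two_armed(0.5, 0.52, 10_000, 50) for _ in range(200)]))
```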
This letter presents an almost sure convergence result for the zeroth-order mirror descent (ZOMD) algorithm. The algorithm admits non-smooth convex functions and a biased oracle which provides only a noisy function value at any desired point. We approximate the subgradient of the objective function using Nesterov's Gaussian Approximation (NGA) with certain alterations suggested by some practical applications. We prove almost sure convergence of the iterates' function values to a neighbourhood of the optimal function value; this neighbourhood cannot be made arbitrarily small, a manifestation of the biased oracle. The letter ends with a concentration inequality, a finite-time analysis that predicts the likelihood that the function value of the iterates lies in the neighbourhood of the optimal value at any finite iteration.
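The following sketch illustrates the general zeroth-order template the abstract refers to: a subgradient estimate built from noisy function values via Gaussian smoothing, plugged into a (Euclidean) mirror-descent step. The smoothing radius, step-size schedule, projection set, and toy objective are illustrative assumptions, not the letter's exact algorithm or bias model.

```python
import numpy as np

def nga_subgradient(f, x, mu=0.05, rng=None):
    """Single-sample Nesterov Gaussian Approximation of a subgradient of f at x,
    built from (possibly noisy) function values only.  The two-point form and
    the smoothing radius mu are illustrative choices."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zomd_step(x, g, eta):
    """One mirror-descent step with the Euclidean mirror map, projected onto the
    nonnegative orthant as an example of a simple constraint set."""
    return np.maximum(x - eta * g, 0.0)

# toy run on the non-smooth convex objective f(x) = ||x - 1||_1 with a noisy oracle
rng = np.random.default_rng(0)
oracle = lambda x: np.abs(x - 1.0).sum() + 0.01 * rng.standard_normal()
x, avg = np.zeros(5), np.zeros(5)
for t in range(1, 5001):
    x = zomd_step(x, nga_subgradient(oracle, x, rng=rng), eta=0.2 / np.sqrt(t))
    avg += (x - avg) / t                 # averaged iterate
print(avg)  # should approach the all-ones minimizer up to smoothing- and noise-induced error
```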
The application of the mirror descent algorithm (MDA) to the one-armed bandit problem in the minimax setting, in relation to data processing, has been considered. This problem is also known as a game with nature, in which the payoff function of the player is the mathematical expectation of the total income. The player must determine the more effective of the two available methods during the control process and ensure its preferential use; the a priori efficiency of one of the methods is known. In this paper, a modification of the MDA that makes it possible to improve the control efficiency by using this additional information has been considered. The proposed strategy preserves the characteristic property of strategies for one-armed bandits: once the known action is applied, it will be applied until the end of control. Modifications of the algorithm for item-by-item processing and for its batch version have been considered. Batch processing is of interest because the total processing time is determined by the number of batches rather than by the original amount of data, with the possibility of processing the data within a batch in parallel. For the proposed algorithms, the optimal values of the adjustable parameters have been calculated using Monte Carlo simulation, and minimax risk estimates have been obtained.
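A toy sketch of the characteristic one-armed-bandit property mentioned above: once the action with known efficiency is selected, it is used for the rest of the control horizon. The confidence-type switching rule below is only an illustrative stand-in for the authors' MDA modification, and all names and constants are assumptions.

```python
import numpy as np

def one_armed_strategy(p_unknown, p_known, N, gamma=2.0, rng=None):
    """Illustrative one-armed-bandit strategy: keep trying the unknown action
    while it still looks competitive; as soon as the known action is chosen,
    apply it until the end of control (the characteristic property above).
    The threshold rule is a stand-in, not the authors' exact algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    income, successes, t = 0, 0, 0
    while t < N:
        if t > 0 and successes / t + gamma / np.sqrt(t) < p_known:
            # switch permanently to the known action for the remaining steps
            income += rng.binomial(N - t, p_known)
            break
        r = rng.binomial(1, p_unknown)
        income, successes, t = income + r, successes + r, t + 1
    return income

print(one_armed_strategy(0.45, 0.50, 10_000))
```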
This letter investigates the convergence and concentration properties of the stochastic mirror descent (SMD) algorithm with biased stochastic subgradients. We establish almost sure convergence of the algorithm's iterates under the assumption of diminishing bias. Furthermore, we derive concentration bounds for the discrepancy between the iterates' function values and the optimal value under standard assumptions. Finally, under the assumption of sub-Gaussian noise in the stochastic subgradients, we present refined concentration bounds for this discrepancy.
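For reference, a minimal stochastic mirror descent loop of the kind the letter analyzes, here with the entropy mirror map on the probability simplex and an oracle whose bias vanishes over time. The oracle model, step-size schedule, and toy problem are illustrative assumptions.

```python
import numpy as np

def smd_entropy(subgrad_oracle, x0, T, step=lambda t: 0.5 / np.sqrt(t)):
    """Stochastic mirror descent on the probability simplex with the entropy
    mirror map (multiplicative update), fed by a possibly biased, noisy
    subgradient oracle.  A generic sketch of the SMD template, not the
    letter's exact setting."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        g = subgrad_oracle(x, t)          # biased stochastic subgradient
        x = x * np.exp(-step(t) * g)      # entropic mirror step
        x /= x.sum()
        avg += (x - avg) / t              # averaged iterate
    return avg

# toy problem: minimize <c, x> over the simplex with a bias that vanishes like 1/t
rng = np.random.default_rng(0)
c = np.array([0.3, 0.1, 0.5])
oracle = lambda x, t: c + 1.0 / t + 0.1 * rng.standard_normal(3)
print(smd_entropy(oracle, np.ones(3) / 3, 5000))   # mass should concentrate on argmin c
```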
We propose some modified versions of the mirror descent algorithm for the two-armed bandit problem which allow parallel processing of data. Using Monte-Carlo simulations, we estimate the minimax risk for these versions.
ISBN (print): 9781509009732
An important issue in reinforcement learning is making the agent avoid dangers and risks during the task, such as physical collisions. We propose a reinforcement learning algorithm based on the CoMirror algorithm, named CoMDS, for problems with a functional constraint. In addition, we modify the proposed CoMDS into Gaussian CoMDS for practical use. We evaluate our algorithms in simulation on the via-point task of a planar robotic arm with a forbidden area, which serves as the constraint. As a result, we find that Gaussian CoMDS explores the policy while satisfying the constraint.
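A rough sketch of the CoMirror-style switching rule that CoMDS builds on: step along the objective subgradient when the constraint is (approximately) satisfied, otherwise along the constraint subgradient, and return the best feasible iterate. The Euclidean mirror map, tolerance, and toy constrained problem are illustrative assumptions, not the paper's CoMDS.

```python
import numpy as np

def comirror_sketch(f, grad_f, g, grad_g, x0, T, eta=0.05, tol=1e-3):
    """CoMirror-style update for min f(x) s.t. g(x) <= 0 with the Euclidean
    mirror map: descend the objective when feasible (within tol), otherwise
    reduce the constraint violation.  Names and constants are illustrative."""
    x = np.asarray(x0, dtype=float)
    best = None
    for _ in range(T):
        if g(x) <= tol:                       # feasible: descend the objective
            if best is None or f(x) < f(best):
                best = x.copy()
            x = x - eta * grad_f(x)
        else:                                 # infeasible: reduce the violation
            x = x - eta * grad_g(x)
    return best if best is not None else x

# toy example: minimize ||x||^2 subject to 1 - x[0] <= 0 (i.e. x[0] >= 1)
f = lambda x: float(x @ x)
grad_f = lambda x: 2 * x
g = lambda x: 1.0 - x[0]
grad_g = lambda x: np.array([-1.0, 0.0])
print(comirror_sketch(f, grad_f, g, grad_g, np.array([3.0, 2.0]), 2000))  # near (1, 0)
```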
We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples coupled over time. We show that as long as the source of randomness is suitably ergodic (it converges quickly enough to a stationary distribution), the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
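A minimal sketch of the idea: the stochastic subgradients are driven by a Markov chain rather than i.i.d. samples, and the averaged iterate still approaches the minimizer under the chain's stationary distribution provided the chain mixes. The two-state chain, step sizes, and quadratic loss below are illustrative assumptions.

```python
import numpy as np

def ergodic_subgradient_descent(grad, chain_step, x0, s0, T,
                                eta=lambda t: 0.2 / np.sqrt(t)):
    """Stochastic subgradient descent driven by a (suitably ergodic) Markov
    chain: at each step the chain moves to a new state s and the update uses
    the subgradient of the per-sample loss at that state.  Names and the
    step-size schedule are illustrative."""
    x, s = np.asarray(x0, dtype=float), s0
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        s = chain_step(s)               # dependent sample from the Markov chain
        x = x - eta(t) * grad(x, s)     # Euclidean (mirror-descent) step
        avg += (x - avg) / t
    return avg

# toy example: sticky two-state chain with stationary mean 0.5; minimize E_s (x - s)^2
rng = np.random.default_rng(0)
chain_step = lambda s: s if rng.random() < 0.9 else 1 - s
grad = lambda x, s: 2 * (x - s)
print(ergodic_subgradient_descent(grad, chain_step, np.array([3.0]), 0, 20000))  # near 0.5
```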
The main goal of this paper is to develop accuracy estimates for stochastic programming problems by employing stochastic approximation (SA) type algorithms. To this end, we show that, while running a mirror descent stochastic approximation procedure, one can compute, with a small additional effort, lower and upper statistical bounds for the optimal objective value. We demonstrate that for a certain class of convex stochastic programs these bounds are comparable in quality with similar bounds computed by the sample average approximation method, while their computational cost is considerably smaller.
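The following sketch shows one way such online bounds can be accumulated while an SA run is in progress: a lower bound from the minima of the sampled linearizations over the feasible set, and an upper bound from a sample-average value at the averaged iterate. The Euclidean setup, box feasible set, and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def sa_with_bounds(F, grad_F, sample, x0, T, eta=0.1, n_extra=500):
    """Run projected Euclidean SA on min_{x in [0,1]^n} E[F(x, xi)] and, along
    the way, accumulate a statistical lower bound (average of the minima of the
    sampled linearizations over the box) and an upper bound (sample-average
    objective at the averaged iterate).  Illustrative sketch only."""
    x = np.clip(np.asarray(x0, dtype=float), 0.0, 1.0)
    avg_x, lower = np.zeros_like(x), 0.0
    for t in range(1, T + 1):
        xi = sample()
        g = grad_F(x, xi)
        lin_min = F(x, xi) - g @ x + np.minimum(g, 0.0).sum()   # min of linearization over the box
        lower += (lin_min - lower) / t
        x = np.clip(x - eta / np.sqrt(t) * g, 0.0, 1.0)          # projected SA step
        avg_x += (x - avg_x) / t
    upper = float(np.mean([F(avg_x, sample()) for _ in range(n_extra)]))
    return lower, upper

# toy problem: F(x, xi) = ||x - xi||^2 with xi uniform on [0,1]^2; optimal value = 1/6
rng = np.random.default_rng(0)
sample = lambda: rng.random(2)
F = lambda x, xi: float(((x - xi) ** 2).sum())
grad_F = lambda x, xi: 2 * (x - xi)
print(sa_with_bounds(F, grad_F, sample, np.array([0.9, 0.1]), 5000))
```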
We study stochastic convex optimization under infinite noise variance. Specifically, when the stochastic gradient is unbiased and has a uniformly bounded (1+κ)-th moment for some κ ∈ (0, 1], we quantify the convergence rate of the stochastic mirror descent algorithm with a particular class of uniformly convex mirror maps, in terms of the number of iterations, the dimensionality, and related geometric parameters of the optimization problem. Interestingly, this algorithm does not require any explicit gradient clipping or normalization, which have been extensively used in several recent empirical and theoretical works. We complement our convergence results with information-theoretic lower bounds showing that no other algorithm using only stochastic first-order oracles can achieve improved rates. Our results have several interesting consequences for devising online/streaming stochastic approximation algorithms for problems arising in robust statistics and machine learning.
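A sketch of stochastic mirror descent with a power-type mirror map of the kind referred to above, run on a toy problem with mean-zero, infinite-variance (Lomax) gradient noise and no clipping. The exponent q, step sizes, and toy objective are illustrative assumptions; the exponent actually required in the paper depends on κ and the problem geometry.

```python
import numpy as np

def smd_power_map(grad_oracle, x0, T, q=1.5, eta=0.05):
    """Stochastic mirror descent with the coordinate-wise power mirror map
    psi(x) = (1/q) * sum_i |x_i|^q, standing in for the uniformly convex
    mirror maps discussed in the abstract.  The choice q = 1.5 and the
    step-size schedule are illustrative."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad_oracle(x)
        # mirror step: grad psi(x) = sign(x)|x|^{q-1}; invert after the gradient step
        y = np.sign(x) * np.abs(x) ** (q - 1) - eta / np.sqrt(t) * g
        x = np.sign(y) * np.abs(y) ** (1.0 / (q - 1))
        avg += (x - avg) / t
    return avg

# toy objective E sum_i |x_i - 1| with mean-zero, infinite-variance Lomax gradient noise
rng = np.random.default_rng(0)
noise = lambda: rng.pareto(1.5, 3) - 2.0          # mean zero, finite moments only below order 1.5
grad_oracle = lambda x: np.sign(x - 1.0) + noise()
print(smd_power_map(grad_oracle, np.zeros(3), 20000))  # should drift toward the all-ones minimizer
```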
ISBN (print): 9781665440899
This paper is concerned with the constrained distributed multi-agent convex optimization problem over a time-varying network. We assume that the bit rate of the communication channel is limited, so a uniform quantizer is applied when exchanging information over the multi-agent network. A quantizer-based distributed mirror descent (QDMD) algorithm, which uses the Bregman divergence as the distance-measuring function, is then developed for this optimization problem, and its convergence is analyzed. By choosing the iteration step size η(t) = λ/√t and the quantization interval υ(t) = λ/t with a prescribed parameter λ, it is shown that the QDMD algorithm achieves a convergence rate of O(1/√T), where T is the number of iterations.
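A compact sketch of the quantize-mix-descend pattern the abstract describes, with a uniform quantizer, a fixed averaging matrix standing in for the time-varying network, and the step size and quantization interval decaying as λ/√t and λ/t. All names and constants are illustrative assumptions, not the paper's exact QDMD.

```python
import numpy as np

def uniform_quantize(x, interval):
    """Uniform quantizer with the given quantization interval."""
    return interval * np.round(x / interval)

def qdmd_sketch(grads, W, x0, T, lam=1.0):
    """Quantized distributed mirror-descent iteration with the Euclidean Bregman
    divergence: each agent averages the quantized states of its neighbours
    (mixing matrix W), then takes a local subgradient step with step size
    lam/sqrt(t) while the quantization interval shrinks as lam/t.  The fixed
    mixing matrix and the Euclidean simplification are illustrative."""
    n = len(grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))
    for t in range(1, T + 1):
        eta, v = lam / np.sqrt(t), lam / t
        Q = uniform_quantize(X, v)                    # states exchanged at finite bit rate
        mixed = W @ Q                                 # consensus step on quantized states
        X = np.array([mixed[i] - eta * grads[i](X[i]) for i in range(n)])
    return X.mean(axis=0)

# toy problem: 3 agents jointly minimizing sum_i ||x - c_i||^2
c = np.array([[0.0, 1.0], [2.0, -1.0], [1.0, 3.0]])
grads = [lambda x, ci=ci: 2 * (x - ci) for ci in c]
W = np.full((3, 3), 1 / 3)                            # fully connected averaging
print(qdmd_sketch(grads, W, np.zeros(2), 3000))       # should approach the mean of the c_i
```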