In the model of local computation algorithms (LCAs), we aim to compute the queried part of the output by examining only a small (sublinear) portion of the input. Many recently developed LCAs on graph problems achieve time and space complexities with very low dependence on n, the number of vertices. Nonetheless, these complexities are generally at least exponential in d, the upper bound on the degree of the input graph. Instead, we consider the case where the parameter d can be moderately dependent on n, and aim for complexities with subexponential dependence on d, while maintaining polylogarithmic dependence on n. We present: (i) a randomized LCA for computing maximal independent sets whose time and space complexities are quasi-polynomial in d and polylogarithmic in n; (ii) for constant epsilon > 0, a randomized LCA that provides a (1 - epsilon)-approximation to maximum matching with high probability, whose time and space complexities are polynomial in d and polylogarithmic in n.
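A minimal sketch (not the paper's algorithm) of the standard locality idea behind such LCAs: assign every vertex a random priority and answer a maximal-independent-set membership query by recursively resolving only the neighbours of higher priority. The toy adjacency dictionary `graph`, the `priority` map and the function name `in_mis` are illustrative choices; a real LCA would probe neighbours on demand and derive priorities from a small shared random seed instead of storing them.

```python
import random
from functools import lru_cache

# Toy graph as adjacency lists; a real LCA would probe neighbours on demand
# rather than hold the whole graph in memory.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

rng = random.Random(42)
priority = {v: rng.random() for v in graph}   # shared random ranks

@lru_cache(maxsize=None)
def in_mis(v):
    """v belongs to the greedy MIS (vertices scanned by increasing priority)
    iff no lower-priority neighbour of v is in the MIS; only those
    neighbours need to be resolved, so the recursion stays local."""
    return all(not in_mis(u) for u in graph[v] if priority[u] < priority[v])

print({v: in_mis(v) for v in graph})
```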
Repetitive Scenario Design (RSD) is a randomized approach to robust design based on iterating two phases: a standard scenario design phase that uses N scenarios (design samples), followed by a randomized feasibility phase that uses N_o test samples on the scenario solution. We give a full and exact probabilistic characterization of the number of iterations required by the RSD approach for returning a solution, as a function of N, N_o, and the desired levels of probabilistic robustness in the solution. This novel approach broadens the applicability of the scenario technology, since the user is now presented with a clear tradeoff between the number N of design samples and the ensuing expected number of repetitions required by the RSD algorithm. The plain (one-shot) scenario design becomes just one of the possibilities, sitting at one extreme of the tradeoff curve, in which one insists on finding a solution in a single repetition: this comes at the cost of a possibly high N. Other possibilities along the tradeoff curve use lower N values, but possibly require more than one repetition.
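A schematic of the two-phase loop, under stated assumptions: `solve_scenario_program`, `is_feasible` and `sample_uncertainty` are user-supplied placeholders, and accepting a candidate only when all N_o test samples are satisfied is one simple acceptance rule, not necessarily the exact rule analysed in the paper.

```python
import random

def repetitive_scenario_design(solve_scenario_program, is_feasible,
                               sample_uncertainty, N, N_o, rng=random):
    """Iterate: design on N random scenarios, then test the candidate on
    N_o fresh samples; return the first candidate that passes the
    feasibility test, together with the number of iterations used."""
    iterations = 0
    while True:
        iterations += 1
        design_samples = [sample_uncertainty(rng) for _ in range(N)]
        candidate = solve_scenario_program(design_samples)
        test_samples = [sample_uncertainty(rng) for _ in range(N_o)]
        if all(is_feasible(candidate, d) for d in test_samples):
            return candidate, iterations

# Toy usage: the "design" is the max of the samples, and a candidate is
# feasible for a test sample if it covers it.
theta, reps = repetitive_scenario_design(
    solve_scenario_program=lambda ds: max(ds),
    is_feasible=lambda theta, d: theta >= d,
    sample_uncertainty=lambda rng: rng.random(),
    N=50, N_o=200)
print(theta, reps)
```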
This paper is concerned with the problem of locating a facility on a line in the presence of strategic agents, also located on that line. Each agent incurs a cost equal to her distance to the facility, whereas the planner wishes to minimize the L_p norm of the vector of agent costs. The location of each agent is only privately known, and the goal is to design a strategyproof mechanism that approximates the optimal cost well. It is shown that the median mechanism provides a 2^(1-1/p) approximation ratio, and that this is the optimal approximation ratio among all deterministic strategyproof mechanisms. For randomized mechanisms, two results are shown. First, for any integer p larger than 2, no mechanism from a rather large class of randomized mechanisms has an approximation ratio better than that of the median mechanism. This is in contrast to the cases p = 2 and p = 1, where a randomized mechanism provably helps improve the worst-case approximation ratio. Second, for the case of 2 agents, the Left-Right-Middle (LRM) mechanism, first designed by Procaccia and Tennenholtz for the special case of the L_infinity norm, provides the optimal approximation ratio among all randomized mechanisms.
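A quick numerical check of the median mechanism against the 2^(1-1/p) bound quoted above. The bound itself is from the abstract; the code, the choice p = 3 and the uniform random instances are mine. The optimal facility is found by bounded scalar minimization, since the L_p cost is convex in the facility location.

```python
import numpy as np
from statistics import median
from scipy.optimize import minimize_scalar

def lp_cost(facility, agents, p):
    # L_p norm of the vector of agent-to-facility distances.
    return np.linalg.norm(np.asarray(agents) - facility, ord=p)

def median_mechanism(agents):
    # Strategyproof: misreporting cannot pull the median toward an agent.
    return median(agents)

rng = np.random.default_rng(0)
p = 3
worst = 0.0
for _ in range(1000):
    agents = rng.uniform(0, 1, size=5)
    opt = minimize_scalar(lambda y: lp_cost(y, agents, p),
                          bounds=(0, 1), method="bounded").fun
    worst = max(worst, lp_cost(median_mechanism(agents), agents, p) / opt)
print(f"empirical worst ratio: {worst:.3f}  vs bound 2^(1-1/p) = {2**(1 - 1/p):.3f}")
```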
This paper contributes to the development of randomized methods for neural networks. The proposed learner model is generated incrementally by stochastic configuration (SC) algorithms, termed SC networks (SCNs). In contrast to the existing randomized learning algorithms for single-layer feed-forward networks, we randomly assign the input weights and biases of the hidden nodes in light of a supervisory mechanism, and the output weights are analytically evaluated in either a constructive or selective manner. As fundamentals of SCN-based data modeling techniques, we establish some theoretical results on the universal approximation property. Three versions of SC algorithms are presented for data regression and classification problems in this paper. Simulation results on both data regression and classification indicate some remarkable merits of the proposed SCNs in terms of reduced human intervention in setting the network size, the scope adaptation of random parameters, fast learning, and sound generalization.
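A toy flavour of the constructive loop described above, with loudly flagged assumptions: hidden nodes are drawn at random, a candidate node is kept only if its activation is sufficiently correlated with the current residual, and the output weights are then recomputed by least squares. The acceptance test here is a crude stand-in for the paper's supervisory mechanism, and the data, thresholds and node budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

H = np.empty((200, 0))        # hidden-layer output matrix, grown column by column
residual = y.copy()
for _ in range(500):
    if H.shape[1] >= 25 or np.linalg.norm(residual) < 1e-2:
        break
    w, b = rng.uniform(-5, 5, size=(1,)), rng.uniform(-5, 5)   # random candidate node
    h = np.tanh(X @ w + b)
    # Simplified acceptance test (stand-in for the supervisory mechanism):
    # keep the node only if it explains a non-negligible part of the residual.
    if (h @ residual) ** 2 <= 1e-3 * (h @ h) * (residual @ residual):
        continue
    H = np.column_stack([H, h])
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
    residual = y - H @ beta
print(f"nodes: {H.shape[1]}, training RMSE: {np.sqrt(np.mean(residual**2)):.4f}")
```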
We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variables. An extrapolation step on the primal variables is performed to obtain an accelerated convergence rate. We also develop a mini-batch version of the SPDC method, which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
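For concreteness, the saddle-point reformulation referred to above can be written as follows; the notation (data vectors a_i, convex losses phi_i with conjugates phi_i^*, regularizer g, dual variables y_i) is the standard one and is my choice, not quoted from the paper.

```latex
\min_{x\in\mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n}\phi_i\!\left(a_i^{\top}x\right)+g(x)
\;=\;
\min_{x\in\mathbb{R}^d}\ \max_{y\in\mathbb{R}^n}\
\frac{1}{n}\sum_{i=1}^{n}\left(y_i\,a_i^{\top}x-\phi_i^{*}(y_i)\right)+g(x)
```

SPDC then alternates a proximal ascent step on one randomly chosen y_i with a proximal descent step on x, together with the extrapolation step on the primal variables mentioned in the abstract.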
Neural networks have been widely used as predictive models to fit data distributions, and they can be built by learning from a collection of samples. In many applications, however, the given dataset may contain noisy samples or outliers, which may result in a learner model with poor generalization. This paper contributes to the development of robust stochastic configuration networks (RSCNs) for resolving uncertain data regression problems. RSCNs are built on the original stochastic configuration networks, with a weighted least squares method for evaluating the output weights, while the input weights and biases are incrementally and randomly generated subject to a set of inequality constraints. The kernel density estimation (KDE) method is employed to set the penalty weight for each training sample, so that some negative impacts of noisy data or outliers on the resulting learner model can be reduced. An alternating optimization technique is applied to update an RSCN model with improved penalty weights computed from the kernel density estimation function. Performance evaluation is carried out on a function approximation task, four benchmark datasets, and a case study on an engineering application. Comparisons with other robust randomised neural modelling techniques, including the probabilistic robust learning algorithm for neural networks with random weights and improved RVFL networks, indicate that the proposed RSCNs with KDE perform favourably and show good potential for real-world applications.
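A compact sketch of the robust re-weighting idea (mine, not the paper's exact procedure): given a hidden-layer output matrix H already produced by the stochastic configuration step, alternate between solving a weighted least squares problem for the output weights and recomputing per-sample penalty weights from a Gaussian kernel density estimate of the residuals, so that samples whose residuals lie in low-density regions (likely outliers) are down-weighted. The bandwidth rule, normalisation and iteration count are arbitrary choices.

```python
import numpy as np

def gaussian_kde_weights(residuals, bandwidth=None):
    """Penalty weight per sample, proportional to the estimated density of
    its residual and rescaled so the largest weight is 1; outliers sit in
    low-density regions and are therefore down-weighted."""
    r = np.asarray(residuals)
    h = bandwidth or 1.06 * r.std() * len(r) ** (-1 / 5)   # Silverman's rule
    dens = np.exp(-0.5 * ((r[:, None] - r[None, :]) / h) ** 2).mean(axis=1)
    return dens / dens.max()

def weighted_least_squares(H, y, w):
    """Output weights minimising sum_i w_i (y_i - H_i beta)^2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(H * sw[:, None], y * sw, rcond=None)
    return beta

def robust_output_weights(H, y, n_iter=5):
    """Alternating optimisation on a fixed hidden-layer matrix H."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        beta = weighted_least_squares(H, y, w)
        w = gaussian_kde_weights(y - H @ beta)
    return beta
```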
Let F be a family of graphs. Given an n-vertex input graph G and a positive integer k, testing whether G has a vertex subset S of size at most k such that G - S belongs to F is a prototype vertex deletion problem. These types of problems have attracted a lot of attention in recent times in the domain of parameterized complexity. In this paper, we study two such problems: when F is either the family of forests of cacti or the family of forests of odd cacti. A graph H is called a forest of cacti if every pair of cycles in H intersects in at most one vertex. Furthermore, a forest of cacti H is called a forest of odd cacti if every cycle of H is of odd length. The vertex deletion problems corresponding to forests of cacti and forests of odd cacti are called Diamond Hitting Set and Even Cycle Transversal, respectively. In this paper we design randomized algorithms for both of these problems. Our algorithms considerably improve the known running times for Diamond Hitting Set and Even Cycle Transversal.
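The structural definitions above can be checked directly on a simple graph: a graph is a forest of cacti exactly when every biconnected block is a bridge or a simple cycle, and a forest of odd cacti additionally requires every such cycle to be odd. The sketch below (using networkx) is only a recogniser for the target family, not the deletion algorithm from the paper.

```python
import networkx as nx

def is_forest_of_cacti(G, odd_only=False):
    """Every biconnected block must be a single edge or a simple cycle
    (equivalently, every pair of cycles meets in at most one vertex);
    with odd_only=True every such cycle must have odd length."""
    for block_edges in nx.biconnected_component_edges(G):
        block_edges = list(block_edges)
        if len(block_edges) == 1:
            continue                                   # a bridge
        vertices = {v for e in block_edges for v in e}
        if len(block_edges) != len(vertices):          # block is not a simple cycle
            return False
        if odd_only and len(block_edges) % 2 == 0:     # even cycle
            return False
    return True

# Two triangles sharing a single vertex: a forest of (odd) cacti.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(is_forest_of_cacti(G), is_forest_of_cacti(G, odd_only=True))  # True True
```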
We show how to compute the permanent of an n x n integer matrix modulo p^k in time n^(k+O(1)) if p = 2, and in time 2^n / exp{Omega(gamma^2 n / (p log p))} if p is an odd prime with kp < n, where gamma = 1 - kp/n. Our algorithms are based on Ryser's formula, a randomized algorithm of Bax and Franklin, and exponential-space tabulation. Using the Chinese remainder theorem, we conclude that for each delta > 0 we can compute the permanent of an n x n integer matrix in time 2^n / exp{Omega(delta^2 n / (beta^(1/(1-delta)) log beta))}, provided there exists a real number beta such that |per A| <= beta^n and beta <= ((1/44) delta n)^(1-delta).
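Ryser's inclusion-exclusion formula, which the abstract names as the starting point of these algorithms, in its direct form (roughly 2^n * n^2 arithmetic operations). The reduction modulo p^k at the end only illustrates the setting; none of the paper's savings (randomisation, tabulation, the Chinese remainder theorem) are reproduced here.

```python
from itertools import combinations

def permanent_ryser(A):
    """per(A) = (-1)^n * sum over column subsets S of
    (-1)^|S| * prod_i (sum of row i restricted to S)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):           # the empty subset contributes 0
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

A = [[1, 2], [3, 4]]
print(permanent_ryser(A))          # 1*4 + 2*3 = 10
print(permanent_ryser(A) % 2**3)   # the same value reduced modulo p^k with p = 2, k = 3
```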
We show that approximating the second eigenvalue of stochastic operators is BPL-complete, thus giving a natural problem complete for this class. We also show that approximating any eigenvalue of a stochastic and Hermitian operator with constant accuracy can be done in BPL. This work, together with related work on the subject, reveals a picture in which the various space-bounded classes (e.g., probabilistic logspace, quantum logspace and the class DET) can be characterized by algebraic problems (such as approximating the spectral gap), where, roughly speaking, the difference between the classes lies in the kind of operators they can handle (e.g., stochastic, Hermitian or arbitrary).
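Not the space-bounded procedure studied in the paper, just a reminder of what approximating the second eigenvalue means for a symmetric stochastic matrix: the top eigenvalue is 1 with the all-ones eigenvector, so power iteration restricted to the orthogonal complement estimates lambda_2 (and hence the spectral gap 1 - lambda_2). The lazy random walk on a 4-cycle is an arbitrary test matrix.

```python
import numpy as np

def second_eigenvalue(P, iters=2000, seed=0):
    """P: symmetric stochastic matrix. Project out the all-ones eigenvector
    (eigenvalue 1) and run power iteration on the complement; returns an
    estimate of |lambda_2|."""
    n = P.shape[0]
    ones = np.ones(n) / np.sqrt(n)
    v = np.random.default_rng(seed).standard_normal(n)
    v -= (ones @ v) * ones
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = P @ v
        v -= (ones @ v) * ones          # re-project to fight round-off
        v /= np.linalg.norm(v)
    return abs(v @ (P @ v))             # Rayleigh quotient at the final iterate

# Lazy random walk on a 4-cycle (symmetric, doubly stochastic, aperiodic).
P = 0.5 * np.eye(4) + 0.25 * np.array([[0, 1, 0, 1],
                                       [1, 0, 1, 0],
                                       [0, 1, 0, 1],
                                       [1, 0, 1, 0]])
print(second_eigenvalue(P), np.linalg.eigvalsh(P)[-2])   # both approximately 0.5
```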
In this article we suggest a randomized algorithm for the LQR (Linear Quadratic Regulator) optimal-control problem via static output feedback. The suggested algorithm is based on the recently introduced randomized optimization method called the Ray-Shooting Method, which efficiently solves the global minimization problem of continuous functions over compact, non-convex, unconnected regions. The algorithm presented here comes with a proof of convergence in probability, and its practical implementation performs well in terms of the quality of the controllers obtained and the rate of success.
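Not the Ray-Shooting Method itself, only the cost evaluation that any randomized search over static output-feedback gains can rely on: for u = -Ky the closed loop is A - BKC, and the LQR cost of a stabilizing K can be scored as trace(P) from a Lyapunov equation. The plant matrices, the gain box and the plain random sampling below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                # only the first state is measured
Q, R = np.eye(2), np.eye(1)

def lqr_cost(K):
    """trace(P) where Acl' P + P Acl = -(Q + C'K'RKC); inf if Acl is unstable."""
    Acl = A - B @ K @ C
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf
    P = solve_continuous_lyapunov(Acl.T, -(Q + C.T @ K.T @ R @ K @ C))
    return np.trace(P)

rng = np.random.default_rng(3)
best_K, best_cost = None, np.inf
for _ in range(2000):                     # plain random search over a box of gains
    K = rng.uniform(-20.0, 20.0, size=(1, 1))
    c = lqr_cost(K)
    if c < best_cost:
        best_K, best_cost = K, c
print(best_K, best_cost)
```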