ISBN (print): 9783540695134
In this paper, we consider the online version of the following problem: partition a set of input points into subsets, each enclosable by a unit ball, so as to minimize the number of subsets used. In the one-dimensional case, we show that, surprisingly, the naïve upper bound of 2 on the competitive ratio can be beaten: we present a new randomized 15/8-competitive online algorithm. We also provide some lower bounds and an extension to higher dimensions.
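For context, the naïve greedy baseline for the one-dimensional case can be sketched in a few lines (this is our own illustration of the simple 2-competitive strategy, not the paper's 15/8-competitive randomized algorithm; the function name is ours):

```python
def online_unit_clustering(points):
    # Greedy baseline: place each arriving point into an existing cluster
    # whose span would stay <= 1 (so it still fits a unit ball), otherwise
    # open a new cluster. Returns the number of clusters used.
    clusters = []  # each cluster stored as (min, max) of its points
    for x in points:
        for i, (lo, hi) in enumerate(clusters):
            if max(hi, x) - min(lo, x) <= 1:
                clusters[i] = (min(lo, x), max(hi, x))
                break
        else:
            clusters.append((x, x))
    return len(clusters)
```

An adversary can force this greedy strategy to open twice as many clusters as the offline optimum, which is exactly the ratio-2 barrier the paper's randomized algorithm breaks.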
ISBN (print): 9781595936318
A distributed consensus algorithm allows n processes to reach a common decision value starting from individual inputs. Wait-free consensus, in which a process always terminates within a finite number of its own steps, is impossible in an asynchronous shared-memory system. However, consensus becomes solvable using randomization when a process only has to terminate with probability 1. Randomized consensus algorithms are typically evaluated by their total step complexity, which is the expected total number of steps taken by all processes. This work proves that the total step complexity of randomized consensus is Θ(n²) in an asynchronous shared-memory system using multi-writer multi-reader registers. The bound is achieved by improving both the lower and the upper bounds for this problem. In addition to improving upon the best previously known result by a factor of log² n, the lower bound features a greatly streamlined proof. Both goals are achieved through restricting attention to a set of layered executions and using an isoperimetric inequality for analyzing their behavior. The matching algorithm decreases the expected total step complexity by a log n factor, by leveraging the multi-writing capability of the shared registers. Its correctness proof is facilitated by viewing each execution of the algorithm as a stochastic process and applying Kolmogorov's inequality.
ISBN (print): 9781450335317
Query plans are compared according to multiple cost metrics in multi-objective query optimization. The goal is to find the set of Pareto plans realizing optimal cost tradeoffs for a given query. So far, only algorithms with exponential complexity in the number of query tables have been proposed for multi-objective query optimization. In this work, we present the first algorithm with polynomial complexity in the query size. Our algorithm is randomized and iterative. It improves query plans via a multi-objective version of hill climbing that applies multiple transformations in each climbing step for maximal efficiency. Based on a locally optimal plan, we approximate the Pareto plan set within the restricted space of plans with similar join orders. We maintain a cache of Pareto-optimal plans for each potentially useful intermediate result to share partial plans that were discovered in different iterations. We show that each iteration of our algorithm performs in expected polynomial time based on an analysis of the expected path length between a random plan and local optima reached by hill climbing. We experimentally show that our algorithm can optimize queries with hundreds of tables and outperforms other randomized algorithms such as the NSGA-II genetic algorithm over a wide range of scenarios.
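The Pareto-dominance test at the heart of any such optimizer is simple to state; the following is a minimal sketch (our own illustration, not the paper's plan-transformation code), where each plan's costs are a tuple of metric values:

```python
def dominates(a, b):
    # a dominates b: no worse on every cost metric, strictly better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(cost_vectors):
    # Keep exactly the cost vectors not dominated by any other: the Pareto set.
    # Comparing a vector against itself is harmless, since dominates(c, c) is False.
    return [c for c in cost_vectors
            if not any(dominates(o, c) for o in cost_vectors)]
```

A multi-objective hill climber would accept a plan transformation only when the new plan's cost vector is not dominated by the current one, and the cache described above would store `pareto_front` of all plans seen per intermediate result.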
ISBN (print): 9781479928934
Orthogonal Matching Pursuit (OMP) can denoise a signal by greedily approximating a least-squares (LS) estimate as a linear combination of elements (atoms) of a dictionary. OMP iteratively decomposes a signal through deterministic atom selections at each iteration step. Recently proposed randomized OMP algorithms employ random atom selections instead and have the potential to further improve denoising. Typically, the best approximation from these algorithms can be obtained only within a narrow range of iterations. In this paper, we propose a novel multi-stage randomized OMP (MS-ROMP) denoising approach that performs successive ROMP runs, each denoising the obtained estimate from the previous one. We show through simulations that, under certain conditions, this can significantly improve denoising performance by producing a good approximation after any number of iterations beyond the sparsity level.
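The randomized atom selection idea can be sketched as follows (our own simplification: atoms are sampled with softmax probability over residual correlations rather than a deterministic argmax; a multi-stage wrapper in the spirit of MS-ROMP would simply re-run this on its own output):

```python
import numpy as np

def randomized_omp(D, y, k, temp=1.0, seed=0):
    # D: dictionary whose columns are atoms; y: noisy signal; k >= 1: sparsity.
    # Each iteration samples an atom with probability ~ exp(|D^T r| / temp)
    # instead of taking the argmax, then refits the support by least squares.
    rng = np.random.default_rng(seed)
    support, r = [], y.astype(float).copy()
    for _ in range(k):
        corr = np.abs(D.T @ r)
        corr[support] = -np.inf                      # never reselect an atom
        w = np.exp((corr - corr.max()) / temp)       # softmax weights
        atom = rng.choice(D.shape[1], p=w / w.sum())
        support.append(int(atom))
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x                    # new residual
    return support, x
```

With a low temperature this behaves like classical OMP; raising it increases the randomness that the multi-stage scheme exploits.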
ISBN (print): 9781467343831
We consider the problem of balancing load items (tokens) on networks. Starting with an arbitrary load distribution, we allow in each round nodes to exchange tokens with their neighbors. The goal is to achieve a distribution where all nodes have nearly the same number of tokens. For the continuous case where tokens are arbitrarily divisible, most load balancing schemes correspond to Markov chains whose convergence is fairly well-understood in terms of their spectral gap. However, in many applications load items cannot be divided arbitrarily and we need to deal with the discrete case where the load is composed of indivisible tokens. This discretization entails a non-linear behavior due to its rounding errors, which makes the analysis much harder than in the continuous case. Therefore, it has been a major open problem to understand the limitations of discrete load balancing and its relation to the continuous case. We investigate several randomized protocols for different communication models in the discrete case. Our results demonstrate that there is almost no difference between the discrete and continuous case. For instance, for any regular network in the matching model, all nodes have the same load up to an additive constant in (asymptotically) the same number of rounds required in the continuous case. This generalizes and tightens the previous best result, which only holds for expander graphs.
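The randomized rounding at the core of the discrete matching model can be sketched as follows (a toy illustration of one balancing round on a fixed matching, not any specific protocol from the paper):

```python
import random

def balance_round(load, matching, rng=None):
    # One round of discrete load balancing in the matching model: every
    # matched pair (u, v) splits its combined tokens evenly; if the total
    # is odd, the leftover indivisible token goes to a random endpoint.
    rng = rng or random.Random(0)
    for u, v in matching:                 # matching: node-disjoint pairs
        low, rem = divmod(load[u] + load[v], 2)
        load[u] = load[v] = low
        if rem:
            load[rng.choice((u, v))] += 1
    return load
```

The random destination of the surplus token is exactly the rounding error whose accumulation makes the discrete analysis harder than the linear, continuous case.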
ISBN (print): 9781450348874
Current deep learning architectures are growing larger in order to learn from complex datasets. These architectures require giant matrix multiplication operations to train millions of parameters. Conversely, there is another growing trend to bring deep learning to low-power, embedded devices. The matrix operations, associated with the training and testing of deep networks, are very expensive from a computational and energy standpoint. We present a novel hashing-based technique to drastically reduce the amount of computation needed to train and test neural networks. Our approach combines two recent ideas, Adaptive Dropout and randomized Hashing for Maximum Inner Product Search (MIPS), to select the nodes with the highest activations efficiently. Our new algorithm for deep learning reduces the overall computational cost of the forward and backward propagation steps by operating on significantly fewer nodes. As a consequence, our algorithm uses only 5% of the total multiplications, while keeping within 1% of the accuracy of the original model on average. A unique property of the proposed hashing-based back-propagation is that the updates are always sparse. Due to the sparse gradient updates, our algorithm is ideally suited for asynchronous, parallel training, leading to near-linear speedup, as the number of cores increases. We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several datasets.
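The hash-based node selection can be illustrated with a simplified sign-random-projection (SimHash) table over neuron weight vectors (our own reduced sketch; the paper's actual MIPS hash family and its coupling with adaptive dropout are more involved):

```python
import numpy as np

def simhash(v, planes):
    # Bucket id = sign pattern of v projected onto fixed random hyperplanes
    return int("".join("1" if p @ v > 0 else "0" for p in planes), 2)

def build_table(weights, planes):
    # Hash every neuron's weight vector once, offline
    table = {}
    for i, w in enumerate(weights):
        table.setdefault(simhash(w, planes), []).append(i)
    return table

def active_nodes(x, table, planes):
    # At inference/training time, hash the input and activate only the
    # neurons in its bucket: those likely to have a large inner product with x
    return table.get(simhash(x, planes), [])
```

Only the retrieved neurons are multiplied against the input, which is the source of the reduced multiplication count; the gradient update then touches only these nodes, which is why it is always sparse.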
ISBN (print): 9781509059928
This paper discusses an application of randomized algorithms for matrix factorization to the classic Kalman filtering technique to estimate the state of a linear dynamical system. We consider the case when the state space is high dimensional leading to a high computational complexity in evaluating the state estimate and the estimation error covariance. We formalize two approaches based on the use of randomized matrix factorization - the first based on a singular value decomposition approach to Kalman filtering and the second based on approximating the prediction step using a randomized approach. We provide an analytic lower bound in the positive semidefinite sense on the estimation error covariance matrix for the first approach, and a lower and an upper bound for the same in the second approach, all of which hold with high probability. Finally, we provide numerical evidence validating the analytic results and also provide insight into the computational gain in the use of the two approaches on synthetically generated data.
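The randomized factorization ingredient can be sketched with the standard Gaussian range finder (a generic sketch in the style of Halko, Martinsson, and Tropp, not the paper's exact filter formulation):

```python
import numpy as np

def randomized_svd(A, k, oversample=5, seed=0):
    # Sketch the range of A with a Gaussian test matrix, orthonormalize it,
    # then take an exact SVD of the much smaller projected matrix Q^T A.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]
```

When the covariance matrices involved are (numerically) low-rank, factoring through the k + oversample dimensional sketch replaces the dominant dense decompositions, which is where the computational gain in the high-dimensional filtering setting comes from.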
ISBN (print): 9781424427079
Combinatorial auctions, where buyers can bid on bundles of items rather than bidding on items sequentially, often lead to more economically efficient allocations of financial resources. However, the problem of determining the winners once the bids are submitted, the so-called Winner Determination Problem (WDP), is known to be NP-hard. We present two randomized algorithms to solve this combinatorial optimization problem. The first is based on the Cross-Entropy (CE) method, a versatile adaptive algorithm that has been successfully applied to solve various well-known difficult combinatorial optimization problems. The other is a new adaptive simulation approach by Botev and Kroese, which evolved from the CE method and combines the adaptiveness and level-crossing ideas of CE with Markov Chain Monte Carlo techniques. The performance of the proposed algorithms is illustrated by various examples.
ISBN (print): 9781450300162
Given a continuous scalar field f : X → R, where X is a topological space, a level set of f is a set {x ∈ X : f(x) = α} for some value α ∈ R. The level sets of f can be subdivided into connected components. As α changes continuously, the connected components in the level sets appear, disappear, split, and merge. The Reeb graph of f encodes these changes in connected components of level sets. It provides a simple yet meaningful abstraction of the input domain. As such, it has been used in a range of applications in fields such as graphics and scientific visualization. In this paper, we present the first sub-quadratic algorithm to compute the Reeb graph for a function on an arbitrary simplicial complex K. Our algorithm is randomized with an expected running time O(m log n), where m is the size of the 2-skeleton of K (i.e., the total number of vertices, edges, and triangles) and n is the number of vertices. This presents a significant improvement over the previous Θ(mn) time complexity for arbitrary complexes, matches (although only in expectation) the best known result for the special case of 2-manifolds, and is faster than current algorithms for any other special case (e.g., 3-manifolds). Our algorithm is also very simple to implement. Preliminary experimental results show that it performs well in practice.
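The union-find sweep underlying this family of algorithms is easiest to see on the simpler sublevel-set (merge-tree) problem, sketched below on the 1-skeleton of a complex (our illustration of the main ingredient only, not the paper's O(m log n) Reeb graph algorithm, which must also track splits via the 2-skeleton):

```python
def sweep_events(values, edges):
    # Sweep vertices by increasing f-value, maintaining the connected
    # components of the sublevel set with union-find; classify each vertex
    # by how many existing components its lower neighbors belong to.
    adj = {v: [] for v in range(len(values))}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    events = {}
    for v in sorted(range(len(values)), key=values.__getitem__):
        parent[v] = v
        roots = {find(u) for u in adj[v] if u in parent and u != v}
        events[v] = ("minimum", "regular", "merge")[min(len(roots), 2)]
        for r in roots:                     # merge touched components into v
            parent[r] = v
    return events
```

In the full Reeb graph computation the downward sweep alone is not enough, because components of level sets can also split; handling both directions efficiently on an arbitrary complex is where the randomization and the O(m log n) bound come in.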
ISBN (print): 9781595939890
This paper presents a new randomized algorithm for achieving consensus among asynchronous processes that communicate by reading and writing shared registers, in the presence of a strong adversary. The fastest previously known algorithm requires a process to perform an expected O(n log² n) read and write operations in the worst case. In our algorithm, each process executes at most an expected O(n log n) read and write operations. It is shown that shared-coin algorithms can be combined together to yield an algorithm with O(n log n) individual work and O(n²) total work.