A wireless sensor network consists of a large number of small, resource-constrained devices and usually operates in hostile environments that are prone to link and node failures. Computing aggregates such as average, minimum, maximum and sum is fundamental to various primitive functions of a sensor network, such as system monitoring, data querying, and collaborative information processing. In this paper, we present and analyze a suite of randomized distributed algorithms to efficiently and robustly compute aggregates. Our Distributed Random Grouping (DRG) algorithm is simple and natural and uses probabilistic grouping to progressively converge to the aggregate value. DRG is local and randomized and is naturally robust against dynamic topology changes from link/node failures. Although our algorithm is natural and simple, it is nontrivial to show that it converges to the correct aggregate value and to bound the time needed for convergence. Our analysis uses the eigenstructure of the underlying graph in a novel way to show convergence and to bound the running time of our algorithms. We also present simulation results of our algorithm and compare its performance to various other known distributed algorithms. Simulations show that DRG needs far fewer transmissions than other distributed localized schemes.
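The grouping-and-averaging idea behind DRG can be illustrated with a toy simulation. This is a minimal sketch, not the authors' exact protocol: each round, a randomly chosen node forms a group with its neighbors and the group replaces its values with their average, so the global average is preserved while local values converge toward it. The graph, function names, and round count are illustrative assumptions.

```python
import random

def drg_average(graph, values, rounds=2000, seed=0):
    """Toy sketch of distributed random grouping: each round a random
    node groups with its neighbors and the group averages its values.
    Group averaging preserves the global sum, so all values converge
    toward the network-wide average."""
    rng = random.Random(seed)
    vals = dict(values)
    nodes = list(graph)
    for _ in range(rounds):
        leader = rng.choice(nodes)
        group = [leader] + list(graph[leader])
        avg = sum(vals[u] for u in group) / len(group)
        for u in group:
            vals[u] = avg
    return vals

# 4-node ring; the true average of [0, 4, 8, 12] is 6
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = drg_average(ring, {0: 0.0, 1: 4.0, 2: 8.0, 3: 12.0})
```

Because each group update is purely local, the scheme tolerates topology changes between rounds, which mirrors the robustness argument in the abstract.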
This paper investigates the problem of nonfragile H-infinity and H-2 filter designs for continuous-time linear systems. Additive filter gain variations are considered to reflect the imprecision in filter implementation. The nonfragile filter design is first formulated as a robust convex optimization problem. Then both deterministic and randomized algorithms are employed to solve the resulting robust convex optimization problem. Compared with the deterministic algorithm, the proposed randomized one has two advantages: on one hand, it has acceptable computational complexity for systems with high dimensions; on the other hand, it can alleviate the conservatism of deterministic algorithms. Several examples are given to illustrate the effectiveness of the proposed method.
We prove that the randomized competitive ratio of online chain partitioning of posets equals the deterministic competitive ratio. (c) 2012 Elsevier B.V. All rights reserved.
In the multislope ski rental problem, the user needs a certain resource for some unknown period of time. To use the resource, the user must subscribe to one of several options, each of which consists of a one-time setup cost ("buying price") and a cost proportional to the duration of the usage ("rental rate"). The larger the price, the smaller the rent. The actual usage time is determined by an adversary, and the goal of an algorithm is to minimize the cost by choosing the best alternative at any point in time. Multislope ski rental is a natural generalization of the classical ski rental problem (where there are only two available alternatives, namely pure rent and pure buy), which is one of the fundamental problems of online computation. The multislope ski rental problem is an abstraction of many problems where online choices cannot be modeled by just two alternatives, e.g., power management in systems which can be shut down in parts. In this paper we study randomized algorithms for multislope ski rental. Our results include an algorithm that produces the best possible online randomized strategy for any additive instance, where the cost of switching from one alternative to another is the difference in their buying prices, and an e-competitive randomized strategy for any (not necessarily additive) instance.
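The classical two-slope case mentioned above has a well-known deterministic baseline that a short sketch can make concrete: rent until the accumulated rent equals the buying price, then buy. This strategy is 2-competitive; the randomized strategies studied in the paper improve on such deterministic ratios. The function names and prices below are illustrative assumptions.

```python
def ski_rental_cost(buy_price, rent_rate, usage_time):
    """Deterministic break-even strategy for classical (two-slope)
    ski rental: rent until total rent would reach the buy price,
    then buy. The cost paid is at most twice the offline optimum."""
    threshold = buy_price / rent_rate          # break-even time
    if usage_time <= threshold:
        return rent_rate * usage_time          # never bought
    return rent_rate * threshold + buy_price   # rented, then bought

def offline_opt(buy_price, rent_rate, usage_time):
    """Clairvoyant optimum: rent the whole time or buy immediately."""
    return min(buy_price, rent_rate * usage_time)

# buy price 10, rent 1/day: the competitive ratio never exceeds 2
ratios = [ski_rental_cost(10, 1, t) / offline_opt(10, 1, t)
          for t in range(1, 40)]
```

The multislope problem generalizes this by offering several (price, rate) pairs, so the algorithm must also decide when to switch between slopes.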
We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices of about 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, running about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.
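The core randomized step behind such decompositions is a range finder: sketch the matrix with a random Gaussian test matrix, then orthonormalize the result. The pure-Python sketch below (tiny matrices, Gram-Schmidt instead of a production QR) is a didactic assumption, not the paper's parallel implementation: for a rank-k matrix, the resulting Q captures the column space, so A - Q Qᵀ A is essentially zero.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def gram_schmidt(Y):
    """Orthonormalize the columns of Y (tiny-matrix Gram-Schmidt)."""
    Q = []
    for v in transpose(Y):
        for q in Q:
            d = sum(x * y for x, y in zip(v, q))
            v = [x - d * y for x, y in zip(v, q)]
        norm = sum(x * x for x in v) ** 0.5
        if norm > 1e-12:
            Q.append([x / norm for x in v])
    return transpose(Q)

def randomized_range(A, k, seed=0):
    """Randomized range finder: Y = A @ Omega with a Gaussian test
    matrix Omega, then orthonormalize Y."""
    rng = random.Random(seed)
    n = len(A[0])
    Omega = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    return gram_schmidt(matmul(A, Omega))

# rank-2 test matrix built from two outer products (illustrative sizes)
u1, u2 = [1, 2, 3, 4], [1, 0, -1, 0]
v1, v2 = [1, 1, 0], [0, 1, 1]
A = [[u1[i] * v1[j] + u2[i] * v2[j] for j in range(3)] for i in range(4)]
Q = randomized_range(A, 2)
R = matmul(Q, matmul(transpose(Q), A))   # projection of A onto range(Q)
err = max(abs(A[i][j] - R[i][j]) for i in range(4) for j in range(3))
```

On real workloads one would use a tuned linear-algebra library; the point here is only the structure of the randomized sketch.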
Sequential randomized algorithms are considered for robust convex optimization, which minimizes a linear objective function subject to a parameter-dependent convex constraint. Employing convex optimization and random sampling of the parameter, these algorithms enable us to obtain a suboptimal solution within reasonable computational time. The suboptimal solution is feasible in a probabilistic sense, and the suboptimal value belongs to an interval that contains the optimal value. The maximum of the interval is the optimal value of the robust convex optimization plus a specified tolerance. Its minimum is, with high probability, the optimal value of the chance-constrained optimization, which is a probabilistic relaxation of the robust convex optimization.
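The sampling idea can be shown on a one-dimensional toy instance, a deliberately simplified assumption rather than the paper's sequential scheme: minimize x subject to x ≥ f(θ) for all θ, but enforce the constraint only on finitely many random draws of θ. The sampled optimum lower-bounds the robust optimum and is feasible with high probability.

```python
import random

def scenario_optimize(constraint, sampler, n_samples, seed=0):
    """Scenario relaxation sketch: minimize x subject to
    x >= constraint(theta) for all theta, approximated by enforcing
    the constraint only on n_samples random parameter draws.
    For this 1-D problem the sampled optimum is just the max."""
    rng = random.Random(seed)
    return max(constraint(sampler(rng)) for _ in range(n_samples))

# toy instance (hypothetical): theta uniform on [0, 1], constraint x >= theta^2
x_scenario = scenario_optimize(lambda t: t * t, lambda r: r.random(), 500)
x_robust = 1.0   # true robust optimum: sup over theta of theta^2
```

With more samples the gap between the sampled value and the robust value shrinks, which is the interval the abstract describes.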
Fitting two-dimensional conic sections (e.g., circular and elliptical arcs) to a finite collection of points in the plane is an important problem in statistical estimation and has significant industrial applications. Recently there has been a great deal of interest in robust estimators, because of their lack of sensitivity to outlying data points. The basic measure of the robustness of an estimator is its breakdown point, that is, the fraction (up to 50%) of outlying data points that can corrupt the estimator. In this paper we introduce nonlinear Theil-Sen and repeated median (RM) variants for estimating the center and radius of a circular arc, and for estimating the center and horizontal and vertical radii of an axis-aligned ellipse. The circular arc estimators have breakdown points of approximately 21% and 50%, respectively, and the ellipse estimators have breakdown points of approximately 16% and 50%, respectively. We present randomized algorithms for these estimators, whose expected running times are O(n^2 log n) for the circular case and O(n^3 log n) for the elliptical case. All algorithms use O(n) space in the worst case. (C) 2001 Elsevier Science B.V. All rights reserved.
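A Theil-Sen-flavoured circle estimator can be sketched by brute force: take the circumcenter of every triple of points and report the coordinate-wise median. This is an illustrative assumption (the paper's randomized algorithms avoid enumerating all triples to reach O(n^2 log n) expected time), but it shows why the median makes the estimate robust to a gross outlier.

```python
import math
import statistics
from itertools import combinations

def circumcenter(p, q, r):
    """Center of the circle through three points (None if collinear)."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def theil_sen_center(points):
    """Median of the circumcenters over all point triples: a robust
    (if brute-force) estimate of the circle center."""
    centers = [c for c in (circumcenter(p, q, r)
                           for p, q, r in combinations(points, 3)) if c]
    return (statistics.median(x for x, _ in centers),
            statistics.median(y for _, y in centers))

# points on the unit circle centered at (2, 3), plus one gross outlier
pts = [(2 + math.cos(t), 3 + math.sin(t))
       for t in [0.1, 0.9, 1.7, 2.5, 3.3, 4.1, 4.9]] + [(50.0, -40.0)]
cx, cy = theil_sen_center(pts)
```

Triples that avoid the outlier all produce the true center exactly, and they outnumber the contaminated triples, so the median lands on (2, 3).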
In the majority problem, we are given n balls coloured black or white and we are allowed to query whether two balls have the same colour or not. The goal is to find a ball of majority colour in the minimum number of queries. The answer is known to be n - B(n), where B(n) is the number of 1's in the binary representation of n. In this paper we study randomized algorithms for determining majority, which are allowed to err with probability at most epsilon. We show that any such algorithm must have expected running time at least (2/3 - o(1))n. Moreover, we provide a randomized algorithm which shows that this result is best possible. These results extend those of De Marco and Pelc [G. De Marco, A. Pelc, Randomized algorithms for determining the majority on graphs, Combin. Probab. Comput. 15 (2006) 823-834]. (C) 2008 Elsevier B.V. All rights reserved.
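A simple Monte Carlo scheme, using only same-colour queries, conveys the flavour of randomized majority finding. This is a didactic sketch, not the paper's query-optimal algorithm: pick a random ball, test it against a random sample, and keep the candidate that agrees with the sample most often; the error probability shrinks with the number of trials and the sample size.

```python
import random

def majority_ball(colours, trials=9, sample=75, seed=1):
    """Monte Carlo sketch: pick a random ball and count how many of
    `sample` random balls share its colour (same-colour queries only);
    over several trials return the ball with the most agreements.
    A majority-colour ball wins with high probability."""
    rng = random.Random(seed)
    n = len(colours)
    same = lambda i, j: colours[i] == colours[j]   # the only query allowed
    best, best_votes = None, -1
    for _ in range(trials):
        b = rng.randrange(n)
        votes = sum(same(b, rng.randrange(n)) for _ in range(sample))
        if votes > best_votes:
            best, best_votes = b, votes
    return best

balls = ['B'] * 13 + ['W'] * 8        # majority colour is 'B'
idx = majority_ball(balls)
```

This uses far more queries than the (2/3 - o(1))n bound requires; the paper's contribution is precisely matching that bound.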
In this paper we study randomized algorithms with random input. We adapt to such algorithms the notion of probability of a false positive which is common in epidemiological studies. The probability of a false positive takes into account both the (controlled) error of the randomization and the randomness of the input, which needs to be modeled. We illustrate our idea on two classes of problems: primality testing and fingerprinting in strings transmission. Although in both cases the randomization has low error, in the first one the probability of a false positive is very low, while in the second one it is not. We end the paper with a discussion of randomness illustrated in a textbook example. (C) 2000 Academic Press.
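The primality-testing case can be made concrete with the standard Miller-Rabin test, whose per-round error on composites is at most 1/4. The distinction the paper draws is between this controlled randomization error and the false-positive probability P(composite | test says prime), which also depends on the prior distribution of inputs. The test below is the textbook algorithm; the sample inputs are illustrative.

```python
import random

def miller_rabin(n, rounds=20, seed=0):
    """Miller-Rabin primality test. Never errs on primes; each round
    declares a composite 'probably prime' with probability <= 1/4."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1) if n > 4 else 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: definitely composite
    return True                   # probably prime

# 97 and 7919 are prime; 561 is a Carmichael number; 7918 is even
results = {n: miller_rabin(n) for n in [97, 561, 7919, 7918]}
```

Note that 561 fools the plain Fermat test for every coprime base, but Miller-Rabin still detects it as composite, which is why its randomization error is so low.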
A massively parallel optimization approach based on simple neighbourhood search techniques is developed and applied to the problem of VLSI cell placement. Statistical models are developed to analyse the performance of the approach in general, and to derive statistical bounds on the quality of obtainable results. Specific questions addressed are: (1) Given a solution with a known cost, how can we measure its quality? (2) Given a target cost for the solution, how likely is the algorithm to generate a solution with that cost or better? (3) Are there any performance bounds for the solutions obtainable by neighbourhood search methods? (4) How can we measure or quantify the performance of different neighbourhood search methods? The results of these analyses suggest a simple framework for approximate solution of difficult problems. The approach is inherently parallel, and it can be implemented on any type of parallel computer. We implemented it in the PVM environment running on a network of workstations connected by Ethernet. The method is empirically verified by testing its performance on a number of sample problems and by comparing the results obtained with earlier results reported in the literature.
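The kind of simple neighbourhood search the paper parallelizes can be sketched on a toy placement problem. The instance below (cells on a line, 2-pin nets, cost = total net span) and the multi-start swap heuristic are illustrative assumptions, not the paper's placer: each start shuffles the cells and then greedily accepts improving pairwise swaps.

```python
import random

def wirelength(order, nets):
    """Toy placement cost: cells occupy the slots given by `order`;
    each net's cost is the span between its cells' slots."""
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net)
               for net in nets)

def neighbourhood_search(cells, nets, restarts=20, steps=300, seed=0):
    """Multi-start neighbourhood search: random initial placement,
    then greedy improving swaps of two cells; keep the best run.
    Independent restarts are what a parallel machine would farm out."""
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for _ in range(restarts):
        order = list(cells)
        rng.shuffle(order)
        cost = wirelength(order, nets)
        for _ in range(steps):
            i, j = rng.randrange(len(order)), rng.randrange(len(order))
            order[i], order[j] = order[j], order[i]
            new = wirelength(order, nets)
            if new < cost:
                cost = new
            else:
                order[i], order[j] = order[j], order[i]   # undo swap
        if cost < best_cost:
            best, best_cost = list(order), cost
    return best, best_cost

# hypothetical instance: 6 cells in a chain of 2-pin nets a-b-c-d-e-f
cells = 'abcdef'
nets = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e'), ('e', 'f')]
placement, cost = neighbourhood_search(cells, nets)
```

The statistical questions in the abstract amount to modelling the distribution of `cost` over such independent restarts, which is exactly what makes the approach embarrassingly parallel.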