The main contribution of this work is to propose energy-efficient randomized initialization protocols for ad-hoc radio networks (ARN for short). First, we show that if the number n of stations is known beforehand, the single-channel ARN can be initialized by a protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. We then address the case where the number n of stations in the ARN is not known beforehand. We begin by discussing an elegant protocol that provides a tight approximation of n. Interestingly, this protocol terminates, with high probability, in O((log n)^2) time slots, and no station has to be awake for more than O(log n) time slots. We use this protocol to design an energy-efficient initialization protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. Finally, we design an energy-efficient initialization protocol for the k-channel ARN that terminates, with high probability, in O(n/k + log n) time slots, with no station being awake for more than O(log n) time slots.
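The size-approximation step lends itself to a short illustration. The following Python sketch simulates a generic channel-probing estimator: in phase i every station broadcasts with probability 2^-i, and a slot carrying exactly one transmission suggests that n is roughly 2^i. The function name, the per-phase repetition count, and the stopping rule are assumptions made for this sketch; it is not necessarily the protocol proposed in the paper.

```python
import random

def estimate_network_size(n, slots_per_phase=8, max_phase=40):
    # Generic channel-probing sketch (not the paper's protocol):
    # in phase i every station broadcasts with probability 2**-i; a slot with
    # exactly one transmitter ("single" status) suggests n is roughly 2**i.
    for i in range(1, max_phase + 1):
        p = 2.0 ** -i
        for _ in range(slots_per_phase):
            transmitters = sum(random.random() < p for _ in range(n))
            if transmitters == 1:
                return 2 ** i
    return None  # no usable slot observed within max_phase phases

if __name__ == "__main__":
    n = 5000
    print("true n =", n, " estimate =", estimate_network_size(n))
```

Since a station only needs to be on the channel in the few slots in which it broadcasts or listens for the outcome, such probing schemes keep per-station awake time logarithmic while the total number of slots stays polylogarithmic.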
We consider on-line and off-line algorithms for special cases of preemptive job shop scheduling to minimize makespan. These special cases are of interest because they commonly arise in the scheduling of computer syste...
The subject of this paper is the design and analysis of Monte Carlo algorithms for two basic matching techniques used in model-based recognition: alignment, and geometric hashing. We first give analyses of our Monte Carlo algorithms, showing that they are asymptotically faster than their deterministic counterparts while allowing failure probabilities that are provably very small. We then describe experimental results that bear out this speed-up, suggesting that randomization results in significant improvements in running time. Our theoretical analyses are not the best possible; as a step toward remedying this we define a combinatorial measure of self-similarity for point sets, and give an example of its power. (C) 1999 Elsevier Science B.V. All rights reserved.
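To make the flavor of randomized alignment concrete, here is a small Python sketch of the generic Monte Carlo idea: repeatedly guess a basis correspondence at random, compute the induced 2-D similarity transform, and verify it against the remaining points. The two-point basis, the tolerance, the support threshold, and all function names are assumptions made for the sketch; this is not the exact procedure or analysis from the paper.

```python
import math
import random

def similarity_from_pairs(m1, m2, s1, s2):
    # 2-D similarity (scale * rotation + translation) mapping m1 -> s1, m2 -> s2,
    # obtained by complex-number division of (s2 - s1) by (m2 - m1).
    mx, my = m2[0] - m1[0], m2[1] - m1[1]
    sx, sy = s2[0] - s1[0], s2[1] - s1[1]
    denom = mx * mx + my * my
    if denom == 0:
        return None
    a = (sx * mx + sy * my) / denom
    b = (sy * mx - sx * my) / denom
    tx = s1[0] - (a * m1[0] - b * m1[1])
    ty = s1[1] - (b * m1[0] + a * m1[1])
    return a, b, tx, ty

def apply_transform(t, p):
    a, b, tx, ty = t
    return (a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty)

def monte_carlo_alignment(model, scene, trials=1500, tol=1e-2, min_support=0.5):
    # Randomized alignment: guess a random basis correspondence, compute the
    # induced transform, and verify it by counting model points that land
    # near some scene point.  Returns early once the support is large enough.
    best = None
    for _ in range(trials):
        m1, m2 = random.sample(model, 2)
        s1, s2 = random.sample(scene, 2)
        t = similarity_from_pairs(m1, m2, s1, s2)
        if t is None:
            continue
        support = sum(
            1 for p in model
            if any(math.dist(apply_transform(t, p), q) < tol for q in scene)
        )
        if support >= min_support * len(model):
            return t, support
        if best is None or support > best[1]:
            best = (t, support)
    return best

if __name__ == "__main__":
    random.seed(1)
    model = [(random.random(), random.random()) for _ in range(10)]
    true_t = (0.8, 0.6, 2.0, -1.0)   # scale/rotation a+bi = 0.8+0.6i, shift (2, -1)
    scene = [apply_transform(true_t, p) for p in model] + \
            [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(5)]
    print(monte_carlo_alignment(model, scene))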
In Mutual Search, recently introduced by Buhrman et al. (1998), static agents are searching for each other: each agent is assigned one of n locations, and the computations proceed by agents sending queries from their location to other locations, until one of the queries arrives at the other agent. The cost of a search is the number of queries made. The best known bounds for randomized protocols using private coins are (1) a protocol with worst-case expected cost of [(n + 1)/2], and (2) a lower bound of (n - 1)/8 queries for randomized protocols which make only a bounded number of coin-tosses. In this paper we strictly improve the lower bound, and present a new upper bound for shared random coins. Specifically, we first prove that the worst-case expected cost of any randomized protocol for two-agent mutual search is at least (n + 1)/3. This is an improvement both in terms of number of queries and in terms of applicability. We also give a randomized algorithm for mutual search with worst-case expected cost of (n + 1)/3. This algorithm works under the assumption that the agents share a random bit string. This bound shows that no better lower bound can be obtained using our technique. (C) 1999 Published by Elsevier Science B.V. All rights reserved.
This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point by simply "walking through" the triangulation, after selecting a "good starting point" by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice. (C) 1999 Elsevier Science B.V. All rights reserved.
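Because the procedure is easy to state, a small illustration helps. The sketch below implements the two-dimensional "jump and walk" idea on top of scipy.spatial.Delaunay: jump to the sampled vertex nearest the query, then walk across triangle edges toward the query using orientation tests. The sample size n^(1/3), the helper names, and the use of SciPy are assumptions made for the illustration, not the authors' implementation (the paper's analysis concerns d = 3 and random point sets).

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def orient(a, b, p):
    # Sign of the signed area of triangle (a, b, p).
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def walk_locate(tri, simplex, q):
    # Visibility walk: step across any edge that separates q from the opposite
    # vertex until no such edge exists (this walk terminates on Delaunay triangulations).
    while True:
        verts = tri.points[tri.simplices[simplex]]
        for i in range(3):
            opp, e1, e2 = verts[i], verts[(i + 1) % 3], verts[(i + 2) % 3]
            if orient(e1, e2, q) * orient(e1, e2, opp) < 0:
                nxt = tri.neighbors[simplex, i]
                if nxt == -1:
                    return -1          # q lies outside the convex hull
                simplex = nxt
                break
        else:
            return simplex             # no separating edge: q is inside this triangle

def jump_and_walk(tri, q, sample_size):
    # "Jump": start at a simplex incident to the sampled vertex nearest to q,
    # then "walk" from there to the triangle containing q.
    idx = rng.choice(len(tri.points), size=sample_size, replace=False)
    nearest = idx[np.argmin(np.linalg.norm(tri.points[idx] - q, axis=1))]
    return walk_locate(tri, tri.vertex_to_simplex[nearest], q)

if __name__ == "__main__":
    n = 20_000
    tri = Delaunay(rng.random((n, 2)))
    q = np.array([0.5, 0.5])
    s = jump_and_walk(tri, q, sample_size=round(n ** (1 / 3)))
    print("located simplex", s, "| scipy check:", tri.find_simplex(q))
```

The walk visits roughly as many triangles as the distance from the starting vertex to the query, which is why choosing the starting point from a modest random sample already brings the expected location time down to a small power of n.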
Authors:
Smith, JR (Escher Labs, Cambridge, MA 02142, USA; MIT Media Lab, Physics and Media Group, Cambridge, MA 02139, USA)
Endowing initially identical processors with the capability to become unique is a practical problem with interesting theoretical structure lurking beneath.
A number of current technologies allow for the determination of interatomic distance information in structures such as proteins and RNA. Thus, the reconstruction of a three-dimensional set of points using information about its interpoint distances has become a task of basic importance in determining molecular structure. The distance measurements one obtains from techniques such as NMR are typically sparse and error-prone, greatly complicating the reconstruction task. Many of these errors result in distance measurements that can be safely assumed to lie within certain fixed tolerances. But a number of sources of systematic error in these experiments lead to inaccuracies in the data that are very hard to quantify; in effect, one must treat certain entries of the measured distance matrix as being arbitrarily "corrupted." The existence of arbitrary errors leads to an interesting sort of error-correction problem: how many corrupted entries in a distance matrix can be efficiently corrected to produce a consistent three-dimensional structure? For the case of an n x n matrix in which every entry is specified, we provide a randomized algorithm running in time O(n log n) that enumerates all structures consistent with at most (1/2 - epsilon)n errors per row, with high probability. In the case of randomly located errors, we can correct errors of the same density in a sparse matrix, one in which only a beta fraction of the entries in each row are given, for any constant beta > 0.
We present a consensus algorithm that combines unreliable failure detection and randomization, two well-known techniques for solving consensus in asynchronous systems with crash failures. This hybrid algorithm combines advantages from both approaches: it guarantees deterministic termination if the failure detector is accurate, and probabilistic termination otherwise. In executions with no failures or failure detector mistakes, the most likely ones in practice, consensus is reached in only two asynchronous rounds.
Maxima in R^d are found incrementally by maintaining a linked list and comparing new elements against the linked list. If the elements are independent and uniformly distributed in the unit cube [0, 1]^d, then, regardless of how the list is manipulated by an adversary, the expected time is O(n log^(d-2) n). This should be contrasted with the fact that the expected number of maxima grows as log^(d-1) n, so no adversary can force an expected complexity of n log^(d-1) n. Note that the expected complexity is O(n) for d = 2. Conversely, there are list-manipulating adversaries for which the given bound is attained. However, if we naively add maxima to the list without changing the order, then the expected number of element comparisons is n + o(n) for any d >= 2. In the paper we also derive new tail bounds and moment inequalities for the number of maxima.
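As a point of reference for the list algorithm discussed above, the following Python sketch maintains the current maxima in a plain list and compares each new element against it, appending new maxima without any reordering (the naive variant mentioned in the abstract). The function names and the uniform test data are assumptions made only for this illustration.

```python
import random

def dominates(p, q):
    # p dominates q if p is at least q in every coordinate and differs somewhere.
    return all(a >= b for a, b in zip(p, q)) and p != q

def incremental_maxima(points):
    # Maintain the current maxima in a plain list: a new point dominated by a
    # stored maximum is dropped; otherwise it evicts the stored maxima it
    # dominates and is appended at the end (no reordering of the list).
    maxima = []
    for p in points:
        if any(dominates(m, p) or m == p for m in maxima):
            continue
        maxima = [m for m in maxima if not dominates(p, m)]
        maxima.append(p)
    return maxima

if __name__ == "__main__":
    d, n = 3, 100_000
    pts = [tuple(random.random() for _ in range(d)) for _ in range(n)]
    print(len(incremental_maxima(pts)), "maxima among", n, "random points")
```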
A very natural randomized algorithm for distributed vertex coloring of graphs is analyzed. Under the assumption that the random choices of processors are mutually independent, the execution time will be O(log n) rounds almost always. A small modification of the algorithm is also proposed. (C) 1999 Elsevier Science B.V. All rights reserved.
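One common formulation of such a natural randomized coloring algorithm can be simulated in a few lines of Python: in every synchronous round, each still-uncolored vertex tentatively picks a random color not used by its already-colored neighbours and keeps it only if no uncolored neighbour made the same tentative choice. Whether this matches the paper's algorithm exactly is an assumption; the sketch only illustrates the style of algorithm being analyzed.

```python
import random

def randomized_coloring(adj, num_colors):
    # Synchronous simulation of a natural randomized (Delta + 1)-coloring:
    # each round, every uncolored vertex picks a random color outside those of
    # its already-colored neighbours and keeps it if no uncolored neighbour
    # made the same tentative choice in this round.
    color = {v: None for v in adj}
    rounds = 0
    while any(c is None for c in color.values()):
        rounds += 1
        tentative = {}
        for v in adj:
            if color[v] is None:
                taken = {color[u] for u in adj[v] if color[u] is not None}
                tentative[v] = random.choice(
                    [c for c in range(num_colors) if c not in taken])
        for v, c in tentative.items():
            if all(tentative.get(u) != c for u in adj[v]):
                color[v] = c
    return color, rounds

if __name__ == "__main__":
    n, p = 200, 0.05                      # small random graph for the demo
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for w in range(u + 1, n):
            if random.random() < p:
                adj[u].add(w)
                adj[w].add(u)
    delta = max(len(nb) for nb in adj.values())
    coloring, rounds = randomized_coloring(adj, delta + 1)
    assert all(coloring[u] != coloring[w] for u in adj for w in adj[u])
    print(f"proper coloring with at most {delta + 1} colors in {rounds} rounds")
```

With a palette of Delta + 1 colors every uncolored vertex always has at least one admissible color, and each vertex keeps its tentative color with constant probability per round, which is the intuition behind the O(log n)-round bound.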