ISBN: 9781450341325 (Print)
In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total Boolean function is given by the function f on n = 2^k bits defined by a complete binary tree of NAND gates of depth k, which achieves R_0(f) = O(D(f)^{0.7537...}). We show this is false by giving an example of a total Boolean function f on n bits whose deterministic query complexity is Ω(n/log n) while its zero-error randomized query complexity is Õ(√n). We further show that the quantum query complexity of the same function is Õ(n^{1/4}), giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total Boolean function g on n variables that has zero-error randomized query complexity Ω(n/log n) and bounded-error randomized query complexity R(g) = Õ(√n). This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is Q_E(g) = Õ(√n). These functions show that the relations D(f) = O(R_1(f)^2) and R_0(f) = Õ(R(f)^2) are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between Q and R_0, a 3/2-power separation between Q_E and R, and a 4th power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson, which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity.
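As a concrete illustration of the benchmark function in the Saks-Wigderson conjecture (this is an illustration, not code from the paper), the sketch below evaluates a complete binary NAND tree of depth k on n = 2^k leaf bits and counts leaf queries. Evaluating the two subtrees in random order and short-circuiting when the first child evaluates to 0 is the classic zero-error strategy that, on average, reads far fewer than n leaves.

import random

def nand_tree(leaves, lo, hi, counter):
    # Evaluate a complete binary NAND tree over leaves[lo:hi), counting leaf reads.
    if hi - lo == 1:
        counter[0] += 1              # one query: read the leaf bit
        return leaves[lo]
    mid = (lo + hi) // 2
    halves = [(lo, mid), (mid, hi)]
    random.shuffle(halves)           # randomized evaluation order
    first = nand_tree(leaves, *halves[0], counter)
    if first == 0:                   # NAND(0, *) = 1: skip the other subtree
        return 1
    second = nand_tree(leaves, *halves[1], counter)
    return 1 - (first & second)      # NAND of the two subtree values

k = 12
n = 2 ** k
x = [random.randint(0, 1) for _ in range(n)]
queries = [0]
print(nand_tree(x, 0, n, queries), queries[0], "of", n, "leaves read")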
A classical problem in addressing a decentralized multiple-access channel is resolving conflicts when a set of stations attempt to transmit at the same time on a shared communication channel. In a static scenario, i.e., when all stations are activated simultaneously, Komlós and Greenberg [IEEE Trans. Inform. Theory, 31 (1985), pp. 302-306] showed in their seminal work that it is possible to resolve the conflict among k stations from an ensemble of n with a nonadaptive deterministic algorithm in time O(k + k log(n/k)) in the worst case. In this paper we show that in a dynamic scenario, when stations can join the channel at arbitrary rounds, there is a nonadaptive deterministic algorithm guaranteeing a successful transmission for each station in only slightly more time: O(k log n log log n) in the worst case. This almost matches the Ω(k log n / log k) lower bound of Greenberg and Winograd [J. ACM, 32 (1985), pp. 589-596], which holds even in much stronger settings: for adaptive algorithms, in the static scenario, and with additional channel feedback (collision detection). In terms of channel utilization, our result implies a throughput, understood as the average number of successful transmissions per time unit, of Ω(1/(log n log log n)) on the dynamic deterministic channel.
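To make the channel model concrete (an illustration, not the paper's algorithm), the sketch below simulates a multiple-access channel in which a round is successful for a station exactly when it is the only active transmitter. The schedule shown is the trivial round-robin baseline, which guarantees each station a success within n rounds of joining; the paper's algorithm brings the worst-case latency down to O(k log n log log n) when only k stations are active.

def simulate_channel(schedules, arrivals, horizon):
    # schedules[i](t) == True means station i transmits in round t while still active;
    # a round succeeds for a station iff it is the unique transmitter in that round.
    first_success = {}
    for t in range(horizon):
        transmitters = [i for i, start in arrivals.items()
                        if start <= t and i not in first_success and schedules[i](t)]
        if len(transmitters) == 1:
            first_success[transmitters[0]] = t
    return first_success

n = 16                                   # size of the station ensemble
schedules = {i: (lambda t, i=i: t % n == i) for i in range(n)}   # round-robin baseline
arrivals = {3: 0, 7: 2, 11: 5}           # stations join the channel at arbitrary rounds
print(simulate_channel(schedules, arrivals, horizon=3 * n))      # {3: 3, 7: 7, 11: 11}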
ISBN: 9781510819672 (Print)
For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., Discrete Fourier Transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^{1+α}(log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ - x̂‖₁ ≤ c‖x̂ - H_k(x̂)‖₁, for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α}(log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k(log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
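For reference (an illustration, not the paper's sublinear algorithm), the sketch below computes the dense Walsh-Hadamard transform in O(N log N) time and forms the best k-sparse approximation H_k. The paper's guarantee bounds the ℓ₁ error of its output against the tail ‖x̂ - H_k(x̂)‖₁ printed at the end, while reading far fewer than N entries of x.

import numpy as np

def fwht(v):
    # Fast Walsh-Hadamard transform (unnormalized); len(v) must be a power of two.
    x = np.array(v, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def H_k(xhat, k):
    # Best k-sparse approximation: keep the k largest-magnitude coefficients.
    out = np.zeros_like(xhat)
    keep = np.argsort(np.abs(xhat))[-k:]
    out[keep] = xhat[keep]
    return out

N, k = 1 << 12, 16
rng = np.random.default_rng(0)
# Signal whose transform is nearly k-sparse: fwht is self-inverse up to a factor N.
x = fwht(H_k(rng.standard_normal(N), k)) / N + 1e-3 * rng.standard_normal(N)
xhat = fwht(x)
print(np.linalg.norm(xhat - H_k(xhat, k), 1))   # the l1 tail the guarantee is measured against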
ISBN: 9781479978090 (Print)
This paper presents a deterministic algorithm for the synthesis of sparse arrays with far-field and near-field constraints. Starting from an initial set of possible positions, the algorithm iteratively solves a sequence of convex optimization problems with the objective of minimizing the number of radiating elements selected from the initial set, in the presence of constraints on the far-field pattern and on the amplitude of the electric field at prescribed points located in the near-field region of the antenna. The method is suitable for sparse arrays of arbitrary geometry. A numerical example is presented to show the effectiveness of the proposed approach.
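The "sequence of convex optimization problems" is in the spirit of iterative reweighted ℓ₁ minimization; the sketch below shows that generic idea for a linear array with far-field constraints only (near-field amplitude constraints would add rows of the same form). The geometry, constraint levels, and variable names are assumptions for illustration, not taken from the paper, and the code assumes the cvxpy package.

import numpy as np
import cvxpy as cp

lam_w = 1.0                                           # wavelength (illustrative units)
pos = np.arange(41) * lam_w / 2                       # dense grid of candidate element positions
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
A = np.exp(1j * 2 * np.pi / lam_w * np.outer(np.sin(theta), pos))   # far-field steering matrix
main = 90                                             # index of the broadside direction (theta = 0)
side = np.abs(theta) > np.deg2rad(10)                 # sidelobe region

w = np.ones(len(pos))                                 # reweighting weights
for _ in range(5):                                    # sequence of convex problems
    x = cp.Variable(len(pos), complex=True)           # complex element excitations
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x))))      # weighted-l1 surrogate for element count
    constraints = [cp.abs(A[main] @ x - 1) <= 1e-3,   # unit field in the mainbeam direction
                   cp.abs(A[side] @ x) <= 0.1]        # -20 dB bound on the far-field sidelobes
    cp.Problem(objective, constraints).solve()
    mag = np.abs(x.value)
    w = 1.0 / (mag + 1e-3 * mag.max())                # penalize small excitations more in the next pass

active = np.sum(np.abs(x.value) > 1e-3 * np.abs(x.value).max())
print("radiating elements kept:", int(active))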
ISBN: 9781510819672 (Print)
Maximum cardinality matching is a fundamental algorithmic problem with many algorithms and applications. The fully dynamic version, in which edges are inserted and deleted over time, has also been the subject of much attention. Existing algorithms for dynamic matching (in general n-vertex, m-edge graphs) fall into two groups: there are fast (mostly randomized) algorithms that achieve a 2-approximation or worse, and there are slow algorithms with Ω(√m) update time that achieve a better-than-2 approximation. Thus the obvious question is whether we can design an algorithm that achieves a tradeoff between these two: an o(√m) update time and a better-than-2 approximation simultaneously. We answer this question in the affirmative. Previously, such bounds were only known for the special case of bipartite graphs. Our main result is a fully dynamic deterministic algorithm that maintains a (3/2 + ε)-approximation in amortized update time O(m^{1/4} ε^{-2.5}). In addition to achieving the tradeoff described above, our algorithm manages to be polynomially faster than all existing deterministic algorithms (excluding an existing O(log n)-approximation of Onak and Rubinfeld), while still maintaining a better-than-2 approximation. We also give stronger results for graphs whose arboricity is at most α. We show how to maintain a (1 + ε)-approximate fractional matching or a (3/2 + ε)-approximate integral matching in worst-case time O(α(α + log n)) for constant ε. When the arboricity is constant, this bound is O(log n), and when the arboricity is polylogarithmic the update time is also polylogarithmic. Previous results for small-arboricity non-bipartite graphs could only maintain a maximal matching (a 2-approximation). We maintain the approximate matching without explicitly using augmenting paths. We define an intermediate graph, called an edge degree constrained subgraph (EDCS), show that the EDCS H contains a large matching, and show how to maintain an EDCS in G. The EDCS was used in previous work on dynamic matching in bipartite graphs.
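Since the EDCS is the key object here, a small checker for one common formulation of its defining property may help; parameter conventions vary across papers, and the β, λ values below are purely illustrative.

from collections import Counter

def is_edcs(G_edges, H_edges, beta, lam):
    # One common formulation: for every edge (u, v) in H, deg_H(u) + deg_H(v) <= beta,
    # and for every edge (u, v) of G missing from H, deg_H(u) + deg_H(v) >= beta * (1 - lam).
    H = {frozenset(e) for e in H_edges}
    deg = Counter()
    for e in H:
        for v in e:
            deg[v] += 1
    for u, v in G_edges:
        s = deg[u] + deg[v]
        if frozenset((u, v)) in H:
            if s > beta:
                return False
        elif s < beta * (1 - lam):
            return False
    return True

# Toy check: path a-b-c-d with H holding the two outer edges.
G = [("a", "b"), ("b", "c"), ("c", "d")]
H_sub = [("a", "b"), ("c", "d")]
print(is_edcs(G, H_sub, beta=3, lam=0.5))   # True: every H-edge has degree sum 2 <= 3,
                                            # and the missing edge (b, c) has degree sum 2 >= 1.5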
In rendezvous, two agents traverse network edges in synchronous rounds and have to meet at some node. In treasure hunt, a single agent has to find a stationary target situated at an unknown node of the network. We study tradeoffs between the amount of information (advice) available a priori to the agents and the cost (number of edge traversals) of rendezvous and treasure hunt. Our goal is to find the smallest size of advice which enables the agents to solve these tasks at some cost C in a network with e edges. This size turns out to depend on the initial distance D and on the ratio e/C, which is the relative cost gain due to advice. For arbitrary graphs, we give upper and lower bounds of O(D log(D · e/C) + log log e) and Ω(D log(e/C)), respectively, on the optimal size of advice. For the class of trees, we give nearly tight upper and lower bounds of O(D log(e/C) + log log e) and Ω(D log(e/C)), respectively.
Consider a synchronous radio network of n stationary nodes represented by an undirected graph with maximum degree Δ. Suppose that each node has a unique ID from {1,..., U}, where U ≫ n. In the neighbourhood learning task, each node must produce a list of the IDs of its neighbours in the network. We prove new lower bounds on the number of slots needed by certain classes of deterministic algorithms that solve this task. First, we show that O(U)-slot round-robin algorithms are optimal for the class of collision-free algorithms. Then, we consider algorithms where each node fixes its entire transmission schedule at the start. For such algorithms, we prove an Ω((Δ²/log Δ) log U)-slot lower bound on schedule length that holds in very general models, e.g., when nodes possess collision detectors, messages can be of arbitrary size, and nodes know the schedules being followed by all other nodes. We also prove a similar result for the SINR model of radio networks. To prove these results, we consider a generalization of cover-free families of sets. We also show a separation between the class of fixed-schedule algorithms and the class of algorithms where nodes can choose to leave out some transmissions from their schedule.
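As a sanity check of the collision-free baseline mentioned above (an illustration, not the paper's lower-bound construction), the sketch below simulates the U-slot round-robin schedule: the node holding ID i transmits alone in slot i, so every node learns its complete neighbour list without collisions.

import random

def round_robin_learning(adj, ids, U):
    # Slot i (1 <= i <= U) is reserved for the node whose ID is i, so at most one
    # node transmits per slot and every neighbour hears it collision-free.
    id_to_node = {ids[v]: v for v in adj}
    learned = {v: set() for v in adj}
    for slot in range(1, U + 1):
        sender = id_to_node.get(slot)
        if sender is None:
            continue                         # no node holds this ID; the slot stays idle
        for nbr in adj[sender]:
            learned[nbr].add(ids[sender])
    return learned

U = 1000                                     # ID universe is much larger than the network
adj = {"x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y"}}
ids = dict(zip(adj, random.sample(range(1, U + 1), len(adj))))
out = round_robin_learning(adj, ids, U)
print(all(out[v] == {ids[u] for u in adj[v]} for v in adj))   # True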
ISBN: 9781510819672 (Print)
Approximate matchings in fully dynamic graphs have been intensively studied in recent years. Gupta and Peng [FOCS'13] presented a deterministic algorithm for maintaining a fully dynamic (1+ε)-approximate maximum cardinality matching (MCM) in general graphs with worst-case update time O(√m · ε^{-2}), for any ε > 0, where m denotes the current number of edges in the graph. Despite significant research efforts, this √m update time barrier remains the state-of-the-art even if amortized time bounds and randomization are allowed or the approximation factor is allowed to increase from 1 + ε to 2 - ε, and even in basic graph families such as planar graphs. This paper presents a simple deterministic algorithm whose performance depends on the density of the graph. Specifically, we maintain a fully dynamic (1 + ε)-approximate MCM with worst-case update time O(α · ε^{-2}) for graphs with arboricity bounded by α. The update time bound holds even if the arboricity bound α changes dynamically. Since the arboricity ranges between 1 and √m, our density-sensitive bound O(α · ε^{-2}) naturally generalizes the O(√m · ε^{-2}) bound of Gupta and Peng. For the family of bounded-arboricity graphs (which includes forests, planar graphs, and graphs excluding a fixed minor), in the regime ε = O(1) our update time reduces to a constant. This should be contrasted with the previous best 2-approximation results for bounded-arboricity graphs, which achieve either an O(log n) worst-case bound (Kopelowitz et al., ICALP'14) or an O(√(log n)) amortized bound (He et al., ISAAC'14), where n stands for the number of vertices in the graph. En route to this result, we provide local algorithms of independent interest for maintaining fully dynamic approximate matching and vertex cover.
ISBN: 9781510819672 (Print)
In this paper we consider the following online pricing problem. An auctioneer is selling identical items in unlimited supply, and each bidder from a given set is interested in purchasing a single copy of the item. Each bidder is characterized by a budget and a time interval during which he considers buying the item. A bidder buys the item at the earliest time within his interval at which the price does not exceed his budget. We call such bidders impatient bidders. The problem is considered in the online setting, i.e., each bidder arrives at the start of his time interval, and only then does the algorithm learn of his existence and his budget. The goal of the seller is to set the price of the item over time so that the total revenue is maximized. We study two versions of the impatient bidders problem: the one introduced by Bansal et al. [TALG'10], and a more restricted setting in which the deadline of each bidder remains unknown until it is hit. We give tight bounds for both settings. Rather surprisingly, in both cases the optimal competitive ratios are the same. In particular, we prove that the competitive ratio of an optimal deterministic algorithm is Θ(log h / log log h), whereas for randomized algorithms it is Θ(log log h).
ISBN: 9781509016129 (Print)
Discovering patterns in biological sequences is very important for extracting useful information from them. Motifs are crucial patterns that have numerous applications, including the identification of transcription factors and their binding sites, composite regulatory patterns, similarity between families of proteins, etc. Several models of motifs have been proposed in the literature. The (l,d)-motif model is one of these and has been studied widely. The (l,d)-motif search problem is also known as Planted Motif Search (PMS). The general PMS problem has been proven to be NP-hard. In this paper, we present an elegant and efficient randomized algorithm, named qPMS10, to solve PMS. Currently, the best known algorithm for solving PMS is qPMS9, which can solve challenging (l,d)-motif instances up to (28,12) and (30,13). qPMS9 is a deterministic algorithm. We provide a performance comparison of qPMS10 with qPMS9 on standard benchmark datasets. Both theoretical and empirical analyses demonstrate that our randomized algorithm outperforms the existing algorithms for solving PMS. Besides, the random sampling techniques we employ in our algorithm can also be extended to solve other motif search problems, including Simple Motif Search (SMS) and Edit-distance based Motif Search (EMS). Furthermore, our algorithm can be parallelized efficiently and has the potential of yielding great speedups on multi-core machines.
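To make the (l,d)-motif model concrete (a brute-force reference, not qPMS10), the sketch below enumerates all l-mers and keeps those that occur in every input sequence within Hamming distance d. Its runtime grows as |alphabet|^l, which is why algorithms such as qPMS9 and qPMS10 are needed for challenging instances like (28,12).

from itertools import product

def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def occurs(motif, seq, d):
    # True if some length-l window of seq is within Hamming distance d of motif.
    l = len(motif)
    return any(hamming(motif, seq[i:i + l]) <= d for i in range(len(seq) - l + 1))

def planted_motif_search(seqs, l, d, alphabet="ACGT"):
    # Brute-force PMS: report every l-mer that occurs, with at most d mismatches,
    # in every input sequence. Only usable for tiny instances.
    return ["".join(m) for m in product(alphabet, repeat=l)
            if all(occurs("".join(m), s, d) for s in seqs)]

seqs = ["ACGTACGTGA", "TTACGAACGG", "CCACGTTTAC"]
print(planted_motif_search(seqs, l=4, d=1))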