ISBN: (print) 9783030356538; 9783030356521
Parallel and distributed computing network-systems are modeled as graphs with vertices representing compute elements and adjacency-edges capturing their uni- or bi-directional communication. Distributed function computation covers a wide spectrum of major applications in distributed systems, such as quantized consensus and collaborative hypothesis testing. Distributed computation over a network-system proceeds in a sequence of time-steps in which vertices update and/or exchange their values based on the underlying algorithm, constrained by the time-(in)variant network-topology. For finite convergence of distributed information dissemination and function computation in this model, we study lower bounds on the number of time-steps for vertices to receive the (initial) vertex-values of all vertices, regardless of the underlying protocol or algorithmics, in time-invariant networks via the notion of vertex-eccentricity in a graph-theoretic framework. We prove a lower bound on the maximum vertex-eccentricity in terms of graph-order and -size in a strongly connected directed graph, and demonstrate its optimality via an explicitly constructed family of strongly connected directed graphs.
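To illustrate the quantity the bound is built on, here is a minimal sketch (our own illustration, not the paper's construction) that computes all vertex-eccentricities of a strongly connected directed graph by breadth-first search; the maximum eccentricity is the number of time-steps some vertex needs before it can have heard every initial value, regardless of protocol.

```python
from collections import deque

def eccentricities(n, adj):
    """BFS from every vertex of a directed graph given as an adjacency
    list {u: [successors]}; the eccentricity of v is the maximum
    shortest-path distance from v to any other vertex (the graph is
    assumed strongly connected, so all distances are finite)."""
    ecc = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc.append(max(dist))
    return ecc

# Directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0: every vertex has eccentricity 3,
# so full information dissemination needs at least 3 time-steps.
cycle = {0: [1], 1: [2], 2: [3], 3: [0]}
print(max(eccentricities(4, cycle)))  # 3
```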
We derive information-theoretic converses (i.e., lower bounds) for the minimum time required by any algorithm for distributed function computation over a network of point-to-point channels with finite capacity, where each node of the network initially has a random observation and aims to compute a common function of all observations to a given accuracy with a given confidence by exchanging messages with its neighbors. We obtain the lower bounds on computation time by examining the conditional mutual information between the actual function value and its estimate at an arbitrary node, given the observations in an arbitrary subset of nodes containing that node. The main contributions include the following. First, a lower bound on the conditional mutual information via so-called small ball probabilities, which captures the dependence of the computation time on the joint distribution of the observations at the nodes, the structure of the function, and the accuracy requirement. For linear functions, the small ball probability can be expressed by Lévy concentration functions of sums of independent random variables, for which tight estimates are available that lead to strict improvements over existing lower bounds on computation time. Second, an upper bound on the conditional mutual information via strong data processing inequalities, which complements and strengthens existing cutset-capacity upper bounds. Finally, a multi-cutset analysis that quantifies the loss (dissipation) of the information needed for computation as it flows across a succession of cutsets in the network. This analysis is based on reducing a general network to a line network with bidirectional links and self-links, and the results highlight the dependence of the computation time on the diameter of the network, a fundamental parameter that is missing from most of the existing lower bounds on computation time.
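The Lévy concentration function mentioned for linear functions can be made concrete with a small Monte Carlo sketch (illustrative only; the function name and parameters are ours, not the paper's). For a sum S of i.i.d. Uniform(0,1) variables, Q(r) = sup_x P(|S − x| ≤ r) is attained at the mean by symmetry, and it shrinks as the number of summands grows, which is why larger sums require more information to pin down to a given accuracy.

```python
import random

def concentration_of_sum(n, r, trials=200_000, seed=0):
    """Monte Carlo estimate of the Levy concentration function
    Q(r) = sup_x P(|S - x| <= r) for S = sum of n i.i.d. Uniform(0,1)
    variables; by symmetry the supremum is attained at the mean n/2."""
    rng = random.Random(seed)
    hits = sum(abs(sum(rng.random() for _ in range(n)) - n / 2) <= r
               for _ in range(trials))
    return hits / trials

# Concentration decays roughly like r/sqrt(n) by central-limit scaling.
print(concentration_of_sum(1, 0.1))   # ≈ 0.2 (exact value for one uniform)
print(concentration_of_sum(25, 0.1))  # noticeably smaller
```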
Distributed computing network-systems are modeled as graphs with vertices representing compute elements and adjacency-edges capturing their uni- or bi-directional communication. Distributed function computation has a ...
We consider a distributed function computation problem where an information sink communicates with N correlated information sources to compute a given deterministic function of the source data. A data-vector is drawn from a discrete and finite probability distribution P, and component x_i is revealed to the i-th source, 1 ≤ i ≤ N. We address this problem in asymmetric communication scenarios where only the sink knows the distribution P. We are interested in computing the minimum number of source bits required, in the worst case, to solve this problem. We propose the notion of functional ambiguity to carry out the worst-case information-theoretic analysis of distributed function computation problems. We establish its various characteristics and prove that it leads to a valid information measure. Then, we provide a constructive solution for the distributed function computation problem in terms of an interactive communication protocol and prove its optimality. Finally, we establish two equivalence classes of compressible and incompressible functions to classify the set of all computable multivariate functions based on the minimum number of source bits needed in the worst case to compute a function in the distributed function computation set-up.
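The paper's functional-ambiguity measure is not reproduced here, but the flavor of the compressible/incompressible split can be illustrated with a trivial worst-case bound (our own toy construction, not the paper's measure): to learn f(x) the sink must at least distinguish all function values attained on the known support.

```python
import math
from itertools import product

def distinct_values_bound(support, f):
    """Trivial worst-case lower bound: the sink must distinguish all
    function values attained on the support, which takes at least
    ceil(log2(#values)) bits in total."""
    values = {f(x) for x in support}
    return math.ceil(math.log2(len(values))) if len(values) > 1 else 0

# Toy support: all binary vectors of length 4. Parity is "compressible"
# (1 bit suffices in the worst case); the identity map is
# "incompressible" (all 4 source bits are needed).
support = list(product([0, 1], repeat=4))
print(distinct_values_bound(support, lambda x: sum(x) % 2))  # 1
print(distinct_values_bound(support, lambda x: x))           # 4
```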
The nearest lattice point problem in R^n is formulated in a distributed network with n nodes. The objective is to minimize the probability that an incorrect lattice point is found, subject to a constraint on inter-node communication. Algorithms with a single as well as an unbounded number of rounds of communication are considered for the case n = 2. For the algorithm with a single round, expressions are derived for the error probability as a function of the total number of communicated bits. We observe that the error exponent depends on the lattice structure and that zero error requires an infinite number of communicated bits. In contrast, with an infinite number of allowed communication rounds, the nearest lattice point can be determined without error with a finite average number of communicated bits and a finite average number of rounds of communication. In two dimensions, the hexagonal lattice, which is most efficient for communication and compression, is found to be the most expensive in terms of communication cost.
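For reference, the centralized version of the 2-D problem is easy; the difficulty the paper studies comes entirely from the communication constraint. A minimal sketch (our own, with a brute-force search around the real-valued basis coordinates; the distributed protocol itself is not shown) finds the nearest point of the hexagonal lattice:

```python
import math

def nearest_hex_point(x, y, search=2):
    """Nearest point of the hexagonal lattice with basis vectors
    b1 = (1, 0) and b2 = (1/2, sqrt(3)/2), found by brute force in a
    small window around the rounded basis coordinates (sufficient
    for a 2-D illustration)."""
    s3 = math.sqrt(3) / 2
    # Invert the basis: (x, y) = u*b1 + v*b2.
    v = y / s3
    u = x - 0.5 * v
    best, best_d = None, float("inf")
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            a, b = round(u) + du, round(v) + dv
            px, py = a + 0.5 * b, s3 * b
            d = (px - x) ** 2 + (py - y) ** 2
            if d < best_d:
                best, best_d = (px, py), d
    return best

print(nearest_hex_point(0.9, 0.1))  # (1.0, 0.0)
```

In the distributed setting, x and y live at different nodes, and the question is how many bits they must exchange before either can output this point.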
ISBN: (digital) 9783030031923
ISBN: (print) 9783030031923; 9783030031916
Distributed computing network-systems are modeled as graphs in which vertices represent compute elements and adjacency-edges capture their uni- or bi-directional communication. Distributed computation over a network-system proceeds in a sequence of time-steps in which vertices update and/or exchange their values based on the underlying algorithm, constrained by the time-(in)variant network-topology. For finite convergence of distributed information dissemination and function computation in this model, we present a lower bound on the number of time-steps for vertices to receive the (initial) vertex-values of all vertices, regardless of the underlying protocol or algorithmics, in time-invariant networks via the notion of vertex-eccentricity.
We present a communication-efficient distributed protocol for computing the Babai point, an approximate nearest point for a random vector X ∈ R^n in a given lattice. We show that the protocol is optimal in the sense that it minimizes the sum rate when the components of X are mutually independent. We then investigate the error probability, i.e., the probability that the Babai point does not coincide with the nearest lattice point, motivated by the fact that in some cases a distributed algorithm for finding the Babai point is sufficient for finding the nearest lattice point itself. Two different probability models for X are considered: uniform and Gaussian. For the uniform model, in dimensions two and three, the error probability is seen to grow with the packing density, and we demonstrate that the densest lattice in dimension two presents the worst error probability. For higher dimensions, we develop probabilistic concentration bounds as well as bounds based on geometric arguments for the error probability. The probabilistic bounds lead to the conclusion that for lattices which generate suitably thin coverings of R^n (which includes lattices that meet Rogers' bound on the covering radius), the error probability goes to unity as n grows. Probabilistic and geometric bounds are also used to estimate the error probability under the uniform model for various lattices including the A_n family and the Leech lattice Λ_24. On the other hand, for the Gaussian model, the error probability goes to zero as the lattice dimension tends to infinity, provided the noise variance is sufficiently small.
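The Babai point itself is computed by a simple rounding step, sketched below (our own illustration; the distributed protocol, where the components of X sit at separate nodes, is not shown). The example input is chosen so that the Babai point differs from the true nearest lattice point, i.e., the error event whose probability the paper analyzes.

```python
import numpy as np

def babai_point(B, x):
    """Babai's rounding step: express x in the coordinates of the
    basis (columns of B) and round each coordinate to the nearest
    integer. This is an approximate nearest lattice point; it is exact
    only when the rounding does not cross a Voronoi boundary."""
    z = np.rint(np.linalg.solve(B, x))
    return B @ z

# Hexagonal basis as columns: b1 = (1, 0), b2 = (1/2, sqrt(3)/2).
B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])
x = np.array([0.0, 0.57])
bp = babai_point(B, x)          # rounds to the point (0.5, sqrt(3)/2) ...
print(np.linalg.norm(bp - x))   # ... which is farther from x ...
print(np.linalg.norm(x))        # ... than the origin: an error event.
```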
We consider the problem of communicating over a relay-assisted multiple-input multiple-output (MIMO) channel with additive noise, in which physically separated relays forward quantized information to a central decoder where the transmitted message is to be decoded. We assume that channel state information is available at the transmitter and show that the design of a rational-forcing precoder, a precoder matched to the quantizers used at the relays, is beneficial for reducing the symbol error probability. It turns out that for such rational-forcing precoder based systems, there is a natural tradeoff between the peak-to-average power ratio at the transmitter and the rate of communication between the relays and the central decoder. The precoder design problem is formulated mathematically, and several algorithms are developed for realizing this tradeoff. Optimality of the decoder communication rate is shown based on a result in distributed function computation. Numerical and simulation results show that a useful tradeoff can be obtained between the excess decoder communication rate and the peak-to-average power ratio at the transmitter.
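The peak-to-average power ratio in this tradeoff is a generic quantity and can be computed directly. A hedged sketch (the function name and the example precoder are ours, not the paper's rational-forcing construction): shaping the precoder away from the identity makes the transmit power vary across symbol vectors, raising the PAPR.

```python
import numpy as np

def papr(precoder, symbols):
    """Peak-to-average power ratio of the transmit vectors P @ s over a
    set of symbol vectors s (one symbol vector per row of `symbols`)."""
    tx = symbols @ precoder.T                  # each row is P @ s
    power = np.sum(np.abs(tx) ** 2, axis=1)    # per-vector transmit power
    return power.max() / power.mean()

# 4-QAM-like real symbol pairs through an identity vs. a skewed precoder.
s = np.array([[a, b] for a in (-1, 1) for b in (-1, 1)], dtype=float)
print(papr(np.eye(2), s))                           # 1.0 (equal powers)
print(papr(np.array([[1.0, 0.5], [0.0, 1.0]]), s))  # > 1.0
```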