We consider the sparse stochastic block model in the case where the degrees are uninformative. The case where the two communities have approximately the same size has been extensively studied, and we concentrate here on the community detection problem in the case of unbalanced communities. In this setting, spectral algorithms based on the non-backtracking matrix are known to solve the community detection problem (i.e., to do strictly better than a random guess) when the signal is sufficiently large, namely above the so-called Kesten-Stigum threshold. In this regime, and when the average degree tends to infinity, we show that if the community of a vanishing fraction of the vertices is revealed, then a local algorithm (belief propagation) is optimal down to the Kesten-Stigum threshold, and we quantify its performance explicitly. Below the Kesten-Stigum threshold, we show that, in the large-degree limit, there is a second threshold, called the spinodal curve, below which the community detection problem is not solvable. The spinodal curve coincides with the Kesten-Stigum threshold when the fraction of vertices in the smallest community is above p* = 1/2 - 1/(2√3), so that the Kesten-Stigum threshold is the threshold for solvability of community detection in this case. However, when the fraction of vertices in the smallest community is below p*, the spinodal curve only provides a lower bound on the threshold for solvability. In the regime below the Kesten-Stigum bound and above the spinodal curve, we also characterize the performance of the best local algorithms as a function of the fraction of revealed vertices. Our proof relies on a careful analysis of the associated reconstruction problem on trees, which might be of independent interest. In particular, we show that the spinodal curve corresponds to the reconstruction threshold on the tree.
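To make the local algorithm concrete, here is a minimal sketch of belief propagation for a two-community sparse SBM with a partially revealed labelling, in the spirit of the abstract above. All names and parameters (edges, a, b, p1, revealed) are illustrative, and the non-edge "external field" correction of the full sparse-SBM equations is omitted, so this is a simplified sketch rather than the authors' algorithm.

```python
import numpy as np

def sbm_bp(edges, n, a, b, p1, revealed, iters=50):
    """Simplified BP for a 2-community sparse SBM with partially revealed labels.

    edges    : list of undirected edges (i, j)
    n        : number of vertices
    a, b     : intra-/inter-community connection parameters (edge prob a/n, b/n)
    p1       : prior fraction of vertices in community 1
    revealed : dict {vertex: community in {0, 1}}
    (Hypothetical names; the non-edge correction term is omitted for brevity.)
    """
    prior = np.array([1.0 - p1, p1])
    C = np.array([[a, b], [b, a]])          # scaled affinity matrix
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # one message per directed edge, initialised at the prior
    msg = {(i, j): prior.copy() for i in range(n) for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            if i in revealed:               # revealed vertices send hard messages
                m = np.eye(2)[revealed[i]]
            else:
                m = prior.copy()
                for k in nbrs[i]:
                    if k != j:
                        m = m * (C @ msg[(k, i)])
            new[(i, j)] = m / m.sum()
        msg = new
    # marginal estimate for each vertex
    marg = np.zeros((n, 2))
    for i in range(n):
        m = np.eye(2)[revealed[i]] if i in revealed else prior.copy()
        for k in nbrs[i]:
            m = m * (C @ msg[(k, i)])
        marg[i] = m / m.sum()
    return marg
```

Revealed vertices simply broadcast hard messages, which is how even a vanishing fraction of known labels breaks the symmetry between the two communities.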
Message-passing algorithms based on belief propagation (BP) are successfully used in many applications, including decoding error-correcting codes and solving constraint satisfaction and inference problems. BP-based algorithms operate over graph representations, called factor graphs, that are used to model the input. Although BP-based algorithms exhibit impressive empirical results in many cases, little has been proved when the factor graphs have cycles. This paper deals with packing and covering integer programs in which the constraint matrix is zero-one, the constraint vector is integral, and the variables are subject to box constraints. We study the performance of the min-sum algorithm when applied to the corresponding factor graph models of packing and covering linear programs (LPs). We compare the solutions computed by the min-sum algorithm for packing and covering problems to the optimal solutions of the corresponding LP relaxations. In particular, we prove that if the LP has an optimal fractional solution, then for each fractional component, the min-sum algorithm either computes multiple solutions or the solution oscillates below and above the fractional value. This implies that the min-sum algorithm computes the optimal integral solution only if the LP has a unique optimal solution that is integral. The converse is not true in general. For a special case of packing and covering problems, we prove that if the LP has a unique optimal solution that is integral and on the boundary of the box constraints, then the min-sum algorithm computes the optimal solution in pseudopolynomial time. Our results unify and extend recent results for the maximum weight matching problem and for the maximum weight independent set problem.
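A quick way to see the LP-relaxation condition in play is to check a toy covering instance. The triangle vertex-cover LP below has a unique optimal solution that is fractional, so by the result above the min-sum algorithm cannot return the optimal integral solution on it. The instance and the use of scipy.optimize.linprog are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

# Toy covering LP: minimize c^T x  s.t.  A x >= b,  0 <= x <= 1,
# with a 0-1 constraint matrix A and integral b (hypothetical instance:
# vertex cover of a triangle).
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
b = np.ones(3)
c = np.array([1.0, 1.0, 1.0])

res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 3, method="highs")
print("LP optimum:", res.x)   # unique optimum is fractional: x = (1/2, 1/2, 1/2)
```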
Estimation of distribution algorithms (EDAs) that use marginal product model factorizations have been widely applied to a broad range of mainly binary optimization problems. In this paper, we introduce the affinity propagation EDA (AffEDA), which learns a marginal product model by clustering a matrix of mutual information learned from the data using a very efficient message-passing algorithm known as affinity propagation. The introduced algorithm is tested on a set of binary and nonbinary decomposable functions and on a hard combinatorial class of problems known as the HP protein model. The results show that the algorithm is a very efficient alternative to other EDAs that use marginal product model factorizations, such as the extended compact genetic algorithm (ECGA), and that it improves the quality of the results achieved by ECGA when the cardinality of the variables is increased.
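The model-building step described above can be sketched in a few lines: estimate pairwise mutual information from the selected population and cluster the variables with affinity propagation, each cluster becoming one factor of the marginal product model. This is a hedged illustration of the idea, not the reference AffEDA implementation; mpm_by_affinity_propagation and the toy population are made up for the example.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import mutual_info_score

def mpm_by_affinity_propagation(pop):
    """Group variables into a marginal product model by clustering a
    mutual-information matrix with affinity propagation.
    pop: population array of shape (individuals, variables)."""
    n_vars = pop.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(n_vars):
            mi[i, j] = mutual_info_score(pop[:, i], pop[:, j])
    ap = AffinityPropagation(affinity="precomputed", max_iter=500, random_state=0)
    labels = ap.fit(mi).labels_
    # each cluster of variables becomes one factor of the marginal product model
    return [np.where(labels == c)[0] for c in np.unique(labels)]

# toy usage on a random binary population (hypothetical data)
rng = np.random.default_rng(0)
print(mpm_by_affinity_propagation(rng.integers(0, 2, size=(200, 8))))
```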
We consider decoding of binary linear Tanner codes using message-passing iterative decoding and linear-programming (LP) decoding in memoryless binary-input output-symmetric (MBIOS) channels. We present new certificates that are based on a combinatorial characterization of the local optimality of a codeword in irregular Tanner codes with respect to any MBIOS channel. This characterization generalizes (Arora et al., Proc. ACM Symp. Theory of Computing, 2009) and (Vontobel, Proc. Inf. Theory and Appl. Workshop, 2010) and is based on a conical combination of normalized weighted subtrees in the computation trees of the Tanner graph. These subtrees may have any finite height (even equal to or greater than half the girth of the Tanner graph). In addition, the degrees of local-code nodes in these subtrees are not restricted to two (i.e., these subtrees are not restricted to skinny trees). We prove that local optimality in this new characterization implies maximum-likelihood (ML) optimality and LP optimality, and show that a certificate can be computed efficiently. We also present a new message-passing iterative decoding algorithm, called normalized weighted min-sum (NWMS). NWMS decoding is a belief-propagation (BP) type algorithm that applies to any irregular binary Tanner code with single parity-check local codes (e.g., low-density and high-density parity-check codes). We prove that if a locally optimal codeword with respect to a height parameter h exists (where, notably, h is not limited by the girth of the Tanner graph), then NWMS decoding finds this codeword in h iterations. The decoding guarantee of the NWMS decoding algorithm applies whenever there exists a locally optimal codeword. Because local optimality of a codeword implies that it is the unique ML codeword, the decoding guarantee also provides an ML certificate for this codeword. Finally, we apply the new local-optimality characterization to regular Tanner codes and prove lower bounds on the noise thresholds.
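For orientation, the check-node side of a normalized min-sum decoder looks as follows. This is the textbook normalized min-sum update (with a scaling factor alpha), not the exact NWMS weighting or schedule of the paper, and the LLR values in the example are hypothetical.

```python
import numpy as np

def normalized_min_sum_check_update(incoming, alpha=0.8):
    """One check-node update of a normalized min-sum decoder: for each edge,
    the outgoing LLR is alpha times the product of the signs and the minimum
    magnitude of the other incoming LLRs."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)
        out[k] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

# example: incoming variable-to-check LLRs on a degree-4 check (hypothetical values)
print(normalized_min_sum_check_update([1.2, -0.4, 2.0, -3.1]))
```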
We develop distributed algorithms for efficient spectrum access strategies in cognitive radio relay networks. In our setup, primary users permit secondary users access to the resource (spectrum) as long as the secondary users consent to aiding the primary users as relays in addition to transmitting their own data. Given a pool of primary and secondary users, we seek to optimize overall network utility by determining the best configuration/pairing of secondary users with primary users. This optimization can be stated in a form similar to the maximum weighted matching problem. Given this formulation, we develop an algorithm based on the affinity propagation technique that is completely distributed in its structure. We demonstrate the convergence of the developed algorithm and show that it performs close to the optimal centralized scheme.
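As a point of reference for the "optimal centralized scheme" mentioned above, the pairing step can be solved centrally as an assignment problem. The sketch below uses scipy's linear_sum_assignment on a hypothetical utility matrix; it is the centralized baseline, not the distributed affinity propagation algorithm of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Centralized baseline for the pairing step: pick the primary/secondary
# pairing that maximizes total network utility, given a utility matrix
# U[p, s] (hypothetical values).
U = np.array([[3.0, 1.0, 2.5],
              [0.5, 2.0, 1.5],
              [1.0, 0.8, 3.0]])
rows, cols = linear_sum_assignment(U, maximize=True)
print(list(zip(rows, cols)), U[rows, cols].sum())
```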
We establish the convergence of the min-sum message-passing algorithm for minimization of a quadratic objective function given a convex decomposition. Our results also apply to the equivalent problem of the convergence of Gaussian belief propagation.
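For concreteness, a minimal Gaussian belief propagation sketch for the quadratic problem min_x 0.5 x'Ax - b'x (equivalently, solving Ax = b) is given below. It assumes A is symmetric and, for instance, diagonally dominant so that the iteration converges; the variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def gaussian_bp(A, b, iters=100):
    """Gaussian BP sketch for solving A x = b, i.e. minimizing 0.5 x'Ax - b'x."""
    n = len(b)
    P = np.zeros((n, n))    # P[i, j]: precision of message i -> j
    mu = np.zeros((n, n))   # mu[i, j]: mean of message i -> j
    for _ in range(iters):
        P_new, mu_new = np.zeros_like(P), np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # aggregate all incoming messages except the one from j
                ks = [k for k in range(n) if k not in (i, j) and A[k, i] != 0.0]
                Pi = A[i, i] + P[ks, i].sum()
                mi = (b[i] + (P[ks, i] * mu[ks, i]).sum()) / Pi
                P_new[i, j] = -A[i, j] ** 2 / Pi
                mu_new[i, j] = Pi * mi / A[i, j]
        P, mu = P_new, mu_new
    # combine incoming messages into marginal means
    x = np.empty(n)
    for i in range(n):
        ks = [k for k in range(n) if k != i and A[k, i] != 0.0]
        x[i] = (b[i] + (P[ks, i] * mu[ks, i]).sum()) / (A[i, i] + P[ks, i].sum())
    return x

# toy usage on a diagonally dominant system (hypothetical numbers)
A = np.array([[3.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(gaussian_bp(A, b), np.linalg.solve(A, b))
```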
An initial bootstrap step for the decoding of low-density parity-check (LDPC) codes is proposed. Decoding is initiated by first erasing a number of less reliable bits. New values and reliabilities are then assigned to the erased bits by passing messages from the nonerased bits through the reliable check equations. The bootstrap step is applied to the weighted bit-flipping algorithm to decode a number of LDPC codes. Large improvements in both performance and complexity are observed.
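The bootstrap idea can be sketched as follows: bits whose channel reliability falls below a threshold are erased, and each erased bit is then re-estimated from parity checks whose other participants are all nonerased. The combining rule used here (sign product and minimum magnitude, averaged over reliable checks) is a plausible stand-in, not necessarily the exact rule of the paper, and all names are illustrative.

```python
import numpy as np

def bootstrap_step(llr, H, threshold):
    """Bootstrap initialisation sketch for LDPC decoding.

    llr       : channel LLRs (one per bit)
    H         : 0-1 parity-check matrix
    threshold : bits with |LLR| below this value are erased
    """
    llr = llr.astype(float).copy()
    erased = np.abs(llr) < threshold
    for i in np.where(erased)[0]:
        estimates = []
        for check in np.where(H[:, i] == 1)[0]:
            others = [j for j in np.where(H[check] == 1)[0]
                      if j != i and not erased[j]]
            # a "reliable" check: every other participating bit is nonerased
            if others and len(others) == np.count_nonzero(H[check]) - 1:
                sign = np.prod(np.sign(llr[others]))
                estimates.append(sign * np.min(np.abs(llr[others])))
        if estimates:
            llr[i] = np.mean(estimates)   # new value and reliability for the bit
    return llr
```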
The application of successive relaxation (SR) to the fixed-point problem associated with the iterative decoding of low-density parity-check (LDPC) codes was proposed by Hemati et al. The simulation results presented by Hemati et al. for the SR version of belief propagation (BP) in the likelihood ratio (LR) domain and that of min-sum (MS) in the log-likelihood ratio (LLR) domain are based on the assumption of all-zero codeword transmission. This assumption, however, results in erroneous error rates when SR is applied in the LR domain. Here, we correct the simulation results reported by Hemati et al. for SR-BP in the LR domain. Furthermore, we investigate the performance of SR-BP and SR-MS in the LLR and LR domains, respectively. The results for a binary-input additive white Gaussian noise (BIAWGN) channel show that for both BP and MS, the application of SR in the two domains of LR and LLR results in different error-correcting performance. In particular, for the tested codes, it is shown that among the four algorithms, SR-MS-LLR has the best performance. It outperforms standard MS and BP by up to about 0.6 dB and 0.3 dB, respectively, offering an attractive solution in terms of the performance/complexity tradeoff.
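The SR mechanism itself is a one-line modification of any fixed-point message update x = F(x): instead of replacing x with F(x), the decoder moves only a fraction beta of the way. The generic sketch below illustrates this; in an actual decoder, x would be the vector of messages in the chosen domain (LR or LLR), and the cosine map is just a toy fixed-point example.

```python
import numpy as np

def successive_relaxation(F, x0, beta=0.5, iters=100):
    """Successive relaxation for a fixed-point iteration x = F(x):
    x <- (1 - beta) * x + beta * F(x), with relaxation factor beta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1.0 - beta) * x + beta * F(x)
    return x

# toy usage: relaxed iteration of a contractive map (hypothetical example)
print(successive_relaxation(lambda x: np.cos(x), np.zeros(1), beta=0.7))
```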
Cascading failures in critical networked infrastructures, even those resulting from a single source of failure, often lead to rapidly widespread outages, as witnessed in the 2003 Northeast blackout in North America. The ensuing problem of containing future cascading failures by placing protection or monitoring nodes in the network is complicated by the uncertainty of the failure source and by missing observations of how the cascading failure might unfold, be it for past cascading failures or future ones. This paper examines the problem of minimizing the outage when a cascading failure from a single source occurs. A stochastic optimization problem is formulated in which a limited number of protection nodes, placed strategically in the network to mitigate systemic risk, minimize the expected spread of a cascading failure. We propose vaccine centrality, a network centrality based on the partially ordered set (poset) characteristics of the stochastic program and on distributed message passing, to design efficient approximation algorithms with provable approximation-ratio guarantees. In particular, we show how vaccine centrality and the poset-constrained graph algorithms can be designed to trade off complexity against optimality, as illustrated through a series of numerical experiments. This paper points toward a general framework of network centrality as statistical inference for designing rigorous graph analytics for statistical problems in networks.
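The placement problem described above can be approached, in its simplest form, with a Monte-Carlo greedy heuristic: estimate the expected outage from a uniformly random single failure source and add, one at a time, the protection node that reduces it the most. This generic sketch is only meant to make the optimization concrete; it uses a plain reachability spread model and is not the vaccine-centrality or poset-based machinery of the paper.

```python
import random

def expected_outage(adj, protected, trials=200):
    """Monte-Carlo estimate of the expected cascade size from a uniformly
    random single failure source; protected nodes block propagation."""
    nodes = list(adj)
    total = 0
    for _ in range(trials):
        src = random.choice(nodes)
        if src in protected:
            continue
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen and v not in protected:
                    seen.add(v)
                    stack.append(v)
        total += len(seen)
    return total / trials

def greedy_protection(adj, k):
    """Greedily place k protection nodes to minimize the estimated outage."""
    protected = set()
    for _ in range(k):
        best = min((n for n in adj if n not in protected),
                   key=lambda n: expected_outage(adj, protected | {n}))
        protected.add(best)
    return protected

# toy usage on a small hypothetical network
random.seed(0)
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(greedy_protection(adj, k=1))
```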
In this paper, we design a receiver that iteratively passes soft information between the channel estimation and data decoding stages. The receiver incorporates sparsity-based parametric channel estimation. State-of-the-art sparsity-based iterative receivers simplify the channel estimation problem by restricting the multipath delays to a grid. Our receiver does not impose such a restriction. As a result, it does not suffer from the leakage effect, which destroys sparsity. Communication at near-capacity rates at high SNR requires a large modulation order. Due to the close proximity of modulation symbols in such systems, the grid-based approximation is of insufficient accuracy. We show numerically that a state-of-the-art iterative receiver with grid-based sparse channel estimation exhibits a bit-error-rate floor in the high-SNR regime. In contrast, our receiver performs very close to the perfect channel state information bound for all SNR values. We also demonstrate, both theoretically and numerically, that parametric channel estimation works well in dense channels, i.e., when the number of multipath components is large and each individual component cannot be resolved.
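The leakage effect mentioned above is easy to reproduce numerically: a single propagation path whose delay falls between the sample-spaced grid points spreads its energy over many discrete-time taps, so the tap vector is no longer sparse. The toy experiment below (64 subcarriers, one path, hypothetical numbers) illustrates this; it is not the receiver of the paper.

```python
import numpy as np

# One-path channel with delay tau observed over N equally spaced subcarriers.
# If tau lies on the sample-spaced delay grid, the IDFT of the frequency
# response is a single tap; an off-grid tau smears energy over many taps.
N = 64
n = np.arange(N)
for tau in (5.0, 5.3):                       # on-grid vs. off-grid delay (in samples)
    Hf = np.exp(-2j * np.pi * n * tau / N)   # frequency response of one path
    taps = np.fft.ifft(Hf)                   # sample-spaced tap representation
    big = np.sum(np.abs(taps) > 0.05)
    print(f"tau = {tau}: {big} taps with magnitude above 0.05")
```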