This paper considers randomized discrete-time consensus systems that preserve the average "on average". As a main result, we provide an upper bound on the mean square deviation of the consensus value from the initial average. Then, we apply our result to systems in which few or weakly correlated interactions take place: these assumptions cover several algorithms proposed in the literature. For such systems we show that, when the network size grows, the deviation tends to zero, and that the speed of this decay is not slower than the inverse of the size. Our results are based on a new approach, which is unrelated to the convergence properties of the system. (C) 2013 Elsevier Ltd. All rights reserved.
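The "preserve the average on average" property can be illustrated with a minimal sketch (not the paper's model): one-sided gossip, in which a uniformly chosen node overwrites its own value with the midpoint of its value and a random peer's. A single step changes the sum, but over the uniform choice of the ordered pair the expected change is zero, so the initial average is preserved only in expectation.

```python
import random

def one_sided_gossip_step(x):
    """One randomized update: node i overwrites its value with the midpoint
    of its own value and a random other node's value. This changes the sum
    of x in general, but the expected change over the uniform random pair
    (i, j) is zero, so the average is preserved only "on average"."""
    n = len(x)
    i, j = random.sample(range(n), 2)
    x[i] = 0.5 * (x[i] + x[j])
    return x

def run(x0, steps, seed=0):
    """Run the gossip chain from initial values x0 for a number of steps."""
    random.seed(seed)
    x = list(x0)
    for _ in range(steps):
        one_sided_gossip_step(x)
    return x
```

Averaging the final values over many independent runs recovers the initial average, while any single run typically deviates from it, which is exactly the mean-square deviation the paper bounds.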
We review the basic outline of the highly successful diffusion Monte Carlo technique commonly used in contexts ranging from electronic structure calculations to rare event simulation and data assimilation, and propose a new class of randomized iterative algorithms based on similar principles to address a variety of common tasks in numerical linear algebra. From the point of view of numerical linear algebra, the main novelty of the fast randomized iteration schemes described in this article is that they have dramatically reduced operations and storage cost per iteration (as low as constant under appropriate conditions) and are rather versatile: we will show how they apply to the solution of linear systems, eigenvalue problems, and matrix exponentiation, in dimensions far beyond the present limits of numerical linear algebra. While traditional iterative methods in numerical linear algebra were created in part to deal with instances where a matrix (of size O(n^2)) is too big to store, the algorithms that we propose are effective even in instances where the solution vector itself (of size O(n)) may be too big to store or manipulate. In fact, our work is motivated by recent diffusion Monte Carlo based quantum Monte Carlo schemes that have been applied to matrices as large as 10^108 x 10^108. We provide basic convergence results, discuss the dependence of these results on the dimension of the system, and demonstrate dramatic cost savings on a range of test problems.
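A toy sketch of the core idea (not the authors' full scheme): inside a classical iteration, the iterate is replaced by a random, unbiased, sparse compression of itself, so only a few entries are ever stored. The compression rule below (keep entry i with probability proportional to its magnitude, reweight by the inverse probability) is one standard unbiased choice.

```python
import numpy as np

def sparsify(v, m, rng):
    """Unbiased random compression: entry i survives with probability
    p_i = min(1, m|v_i| / ||v||_1) and is reweighted by 1/p_i, so the
    expected output equals v while only about m entries are nonzero."""
    p = np.minimum(1.0, m * np.abs(v) / np.sum(np.abs(v)))
    keep = rng.random(v.shape) < p
    out = np.zeros_like(v)
    out[keep] = v[keep] / p[keep]
    return out

def fri_power_iteration(A, m, iters, rng):
    """Power iteration in which the iterate is randomly sparsified before
    every multiply, so the multiply touches only ~m columns of A."""
    n = A.shape[0]
    v = np.ones(n) / n
    for _ in range(iters):
        v = A @ sparsify(v, m, rng)
        v /= np.linalg.norm(v)
    return v
```

On a small diagonal test matrix the randomized iteration still locks onto the dominant eigenvector; the point of the paper's analysis is to control the variance this compression injects as the dimension grows.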
The problem of document replacement in web caches has received much attention in recent research, and it has been shown that the eviction rule "replace the least recently used document" performs poorly in web caches. Instead, it has been shown that using a combination of several criteria, such as the recentness and frequency of use, the size, and the cost of fetching a document, leads to a sizable improvement in hit rate and latency reduction. However, in order to implement these novel schemes, one needs to maintain complicated data structures. We propose randomized algorithms for approximating any existing web-cache replacement scheme, thereby avoiding the need for any data structures. At document-replacement times, the randomized algorithm samples N documents from the cache and replaces the least useful document from the sample, where usefulness is determined according to the criteria mentioned above. The next M < N least useful documents are retained for the succeeding iteration. When the next replacement is to be performed, the algorithm obtains N - M new samples from the cache and replaces the least useful document from the N - M new samples and the M previously retained. Using theory and simulations, we analyze the algorithm and find that it matches the performance of existing document replacement schemes for values of N and M as low as 8 and 2, respectively. Interestingly, we find that retaining a small number of samples from one iteration to the next leads to an exponential improvement in performance as compared to retaining no samples at all.
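The replacement loop is simple enough to sketch directly. The utility function is left abstract here; in the paper it would encode recency, frequency, size, and fetch cost.

```python
import random

def evict(cache, utility, N=8, M=2, carried=None, rng=random):
    """One replacement round of the sampling scheme: draw fresh samples so
    that, together with the M documents carried over from the last round,
    N candidates are compared; evict the least useful candidate and carry
    the next M least useful into the following round."""
    carried = carried or []
    fresh_needed = N - len(carried)
    pool = [d for d in cache if d not in carried]
    candidates = carried + rng.sample(pool, fresh_needed)
    candidates.sort(key=utility)            # least useful first
    victim = candidates[0]
    cache.remove(victim)
    return victim, candidates[1:1 + M]      # evicted doc, new carry-over
```

The carry-over is the key design choice: the retained samples are already known to be low-utility, so each round starts with better eviction candidates than fresh sampling alone would give, which is the source of the exponential improvement the authors report.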
Neighbor discovery is a first step in the initialization of large-scale wireless ad hoc networks. In this paper, we propose a randomized neighbor discovery scheme for wireless networks with a multi-packet reception (MPR) capability. We let the nodes have different advertisement probabilities. Each node gradually adjusts its probability according to its operation phase: greedy or slow-start. In the greedy phase, the node advertises aggressively, while in the slow-start phase it advertises moderately. The initial phase and advertisement probability are chosen randomly; the nodes then adapt the probability according to the reception state of advertisements from the other nodes. To decide the reception state precisely, the exact number of nodes in the network is needed. To make our proposed scheme work with no prior knowledge of the population, we propose a population estimation method based on maximum likelihood estimation. We evaluate our proposed scheme through numerical analysis and simulation. The numerical analysis shows that the discovery completion time is lower bounded by Theta(N/k) and upper bounded by Theta(N ln N / k) when there are N nodes with MPR-k capability; these bounds match those of previous studies that propose a static optimal advertisement probability. The simulation shows that, when the population of the network is unknown, our adaptive scheme outperforms a scheme with a static advertisement probability in terms of discovery completion time, advertisement efficiency, and wasted time slot ratio. (C) 2018 Elsevier Inc. All rights reserved.
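A minimal slotted-time sketch of the static-probability baseline the authors compare against (the adaptive greedy/slow-start adjustment and the maximum-likelihood population estimate are omitted). The choice p = k/N is an assumed, natural static setting when N is known; MPR-k is modeled as "a slot succeeds iff at most k nodes transmit".

```python
import random

def discovery_time(N, k, p, rng):
    """Slotted neighbor-discovery simulation with MPR-k: in each slot every
    node advertises independently with probability p; if between 1 and k
    nodes advertise, all of them are heard (discovered). Returns the number
    of slots until all N nodes have been discovered at least once."""
    undiscovered = set(range(N))
    slots = 0
    while undiscovered:
        slots += 1
        talkers = [i for i in range(N) if rng.random() < p]
        if 0 < len(talkers) <= k:
            undiscovered.difference_update(talkers)
    return slots
```

Averaging this over seeds reproduces the coupon-collector flavor behind the Theta(N ln N / k) upper bound: late in the run most successful slots re-announce already-discovered nodes.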
In this paper, we study the Parameterized P2-Packing problem and the Parameterized Co-Path Packing problem from a randomized perspective. For the Parameterized P2-Packing problem, based on a structural analysis of the problem and using the random partition technique, we obtain a randomized parameterized algorithm with running time O*(6.75^k), improving the current best result of O*(8^k). For the Parameterized Co-Path Packing problem, we first study the kernel and a randomized algorithm for the degree-bounded instance, where each vertex has degree at most three. A kernel of size 20k and a randomized algorithm with running time O*(2^k) are given for the Parameterized Co-Path Packing problem with the bounded-degree constraint. By applying the iterative compression technique on top of the randomized algorithm for the degree-bounded problem, a randomized algorithm with running time O*(3^k) is obtained for the general Parameterized Co-Path Packing problem.
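These running times all have the shape O*(c^k), where 1/c is the success probability of a single random trial (6.75 = 27/4, consistent with one trial succeeding with probability about (4/27)^k). The paper's specific partition schemes are not reproduced here; a generic driver for such algorithms, with the trial itself left abstract, is just boosted repetition:

```python
import math
import random

def randomized_trials(trial, success_prob, delta=0.01, rng=random):
    """Generic driver for random-partition algorithms: if a single random
    trial (e.g. one random coloring of the vertices) succeeds with
    probability at least success_prob, repeating it
    ceil(ln(1/delta) / success_prob) times fails with probability < delta,
    since (1 - p)^(ln(1/delta)/p) <= delta."""
    repeats = math.ceil(math.log(1.0 / delta) / success_prob)
    for _ in range(repeats):
        result = trial(rng)
        if result is not None:
            return result
    return None
```

With success_prob = (4/27)^k the number of repeats is O*(6.75^k), which is where the stated running time comes from once each trial runs in polynomial time.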
Decomposition of a matrix into low-rank factors is a powerful tool for scientific computing and data analysis. The goal is to obtain a low-rank approximation by factoring the original matrix into a product of smaller, lower-rank matrices, or by randomly projecting the matrix down to a lower-dimensional space. Such a decomposition requires less storage and computation. The focus of this paper is on randomized methods, which preserve the properties of the original matrix as far as possible through subspace sampling. In many applications, randomized algorithms are much better than classical decomposition algorithms in terms of accuracy, stability, and speed. In this study, we propose a sparse orthogonal transformation matrix to reduce the dimension of the data. The results show that, compared with the most accurate methods, the transformation is much faster and can save a large amount of memory in the case of huge matrices.
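The paper's specific sparse orthogonal construction is not given in the abstract; a standard transform in the same spirit is the Achlioptas-style sparse random projection, which is mostly zeros yet preserves geometry in expectation:

```python
import numpy as np

def sparse_projection(n_features, dim, s=3, rng=None):
    """Achlioptas-style sparse random projection: entries are +sqrt(s), 0,
    -sqrt(s) with probabilities 1/(2s), 1 - 1/s, 1/(2s), scaled by
    1/sqrt(dim) so that squared norms (and pairwise distances) are
    preserved in expectation. With s=3, two thirds of the entries are
    exactly zero, so applying the transform is cheap and memory-light."""
    rng = rng or np.random.default_rng(0)
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_features, dim),
                      p=[0.5 / s, 1 - 1.0 / s, 0.5 / s])
    return vals / np.sqrt(dim)
```

Usage: Y = X @ sparse_projection(X.shape[1], dim) reduces each row of X to `dim` coordinates while approximately preserving its norm.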
We present randomized algorithms based on block Krylov subspace methods for estimating the trace and log-determinant of Hermitian positive semi-definite matrices. Using the properties of Chebyshev polynomials and Gaussian random matrices, we provide an error analysis of the proposed estimators and obtain expectation and concentration error bounds. These bounds improve the corresponding ones given in the literature. Numerical experiments are presented to illustrate the performance of the algorithms and to test the error bounds.
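The baseline that such block-Krylov estimators refine is the Girard-Hutchinson quadratic-form average: for a Gaussian probe z, E[z^T A z] = tr(A). A minimal version with a block of probes (the Krylov and Chebyshev machinery of the paper is omitted):

```python
import numpy as np

def hutchinson_trace(A, num_probes, rng):
    """Girard-Hutchinson estimator: average z^T A z over a block of
    independent standard Gaussian probes z. Each term is an unbiased
    estimate of tr(A); averaging shrinks the variance like 1/num_probes."""
    n = A.shape[0]
    Z = rng.standard_normal((n, num_probes))
    return np.mean(np.sum(Z * (A @ Z), axis=0))
```

Only matrix-vector products with A are needed, which is why the approach scales to matrices that are available only implicitly.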
One popular way to compute the CANDECOMP/PARAFAC (CP) decomposition of a tensor is to transform the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure involving the factor matrices. In this work, based on choosing the factor matrix randomly, we propose a mini-batch stochastic gradient descent method with importance sampling for these special least squares subproblems. Two different sampling strategies are provided; they avoid forming the full KRP explicitly and computing the corresponding probabilities directly. An adaptive step-size version of the method is also given. We present the method's theoretical properties and comprehensive numerical performance. The results on synthetic and real data show that our method is effective and efficient, and that for unevenly distributed data it performs better than the corresponding method in the literature.
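The KRP-structured solver itself is not reproduced here; the sketch below shows the shared ingredient on a generic overdetermined least squares problem min ||Ax - b||^2: mini-batch SGD in which row i is drawn with probability proportional to ||a_i||^2 and reweighted so the stochastic gradient stays unbiased.

```python
import numpy as np

def importance_sgd(A, b, batch, step, iters, rng):
    """Mini-batch SGD for min ||Ax - b||^2 with importance sampling: row i
    is drawn with probability p_i proportional to ||a_i||^2 and its
    gradient term reweighted by 1/p_i, which keeps the mini-batch gradient
    an unbiased estimate of the full gradient 2 A^T (Ax - b) while
    focusing the work on the influential rows."""
    m, n = A.shape
    p = np.sum(A * A, axis=1)
    p = p / p.sum()
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch, p=p)
        r = A[idx] @ x - b[idx]
        g = (2.0 / batch) * (A[idx] * (r / p[idx])[:, None]).sum(axis=0)
        x -= step * g
    return x
```

A nice property of squared-norm sampling is that the smoothness constant of every reweighted per-row term equals 2||A||_F^2, independent of the row, which makes a single constant step size safe.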
We show that the probability of the exceptional set decays exponentially for a broad class of randomized algorithms approximating solutions of ODEs, admitting a certain error decomposition. This class includes randomized explicit and implicit Euler schemes, and the randomized two-stage Runge-Kutta scheme (under inexact information). We design a confidence interval for the exact solution of an IVP and perform numerical experiments to illustrate the theoretical results.
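A minimal sketch of one scheme in this class, the randomized explicit Euler method: the vector field is evaluated at a uniformly drawn point inside each step rather than at the left endpoint, which improves accuracy for right-hand sides that are rough in time.

```python
import random

def randomized_euler(f, t0, x0, T, n, rng):
    """Randomized explicit Euler on [t0, T] with n steps:
    x_{k+1} = x_k + h * f(t_k + theta_k * h, x_k), theta_k ~ U(0, 1).
    The random evaluation point makes the quadrature of the time
    dependence unbiased within each step."""
    h = (T - t0) / n
    t, x = t0, x0
    for _ in range(n):
        theta = rng.random()
        x = x + h * f(t + theta * h, x)
        t += h
    return x
```

Repeating the run with independent randomness and averaging gives exactly the kind of sample the paper's exponential tail bound and confidence interval are built on.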
The generalized singular value decomposition (GSVD) of two matrices with the same number of columns is a very useful tool in many practical applications. However, the GSVD may suffer from heavy computational time and memory requirements when the scale of the matrices is quite large. In this paper, we use random projections to capture most of the action of the matrices and propose randomized algorithms for computing a low-rank approximation of the GSVD. Error bounds of the approximation are also presented for the proposed randomized algorithms. Finally, some experimental results show that the proposed randomized algorithms can achieve good accuracy with less computational cost and storage requirement.
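The core primitive behind such methods, applied here to a single matrix rather than the matrix pair of the GSVD, is the Halko-Martinsson-Tropp randomized range finder: a Gaussian test matrix sketches the range of A, and an orthonormal basis of the sketch yields a low-rank approximation.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, rng=None):
    """Randomized range finder: sketch the range of A with a Gaussian test
    matrix, orthonormalize the sketch, and approximate A by projecting
    onto that basis, A ~= Q @ (Q.T @ A). The oversampling parameter
    trades a slightly larger basis for a much more reliable capture of
    the top `rank` directions."""
    rng = rng or np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    return Q, Q.T @ A
```

When A has exact (or numerically sharp) rank r and the sketch size exceeds r, the sketch spans the full range of A almost surely, so the approximation error drops to machine precision.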