We present three randomized pseudo-polynomial algorithms for the problem of finding a base of specified value in a weighted represented matroid subject to parity conditions. These algorithms, the first two being improved versions of those presented by P. M. Camerini et al. (1992, J. Algorithms 13, 258-273), use fast arithmetic working over a finite field chosen at random among a set of appropriate fields. We show that the choice of a best algorithm among those presented depends on a conjecture related to the best value of the so-called Linnik constant concerning the distribution of prime numbers in arithmetic progressions. This conjecture, which we call the C-conjecture, is a strengthened version of a conjecture formulated in 1934 by S. Chowla. If the C-conjecture is true, the choice of a best algorithm is simple, since the last algorithm exhibits the best performance, whether performance is measured in arithmetic operations or in bit operations under mild assumptions. If the C-conjecture is false, we are still able to identify a best algorithm, but in this case the choice is between the first two algorithms and depends on the asymptotic growth of m with respect to that of U and n, where 2n, 2m, and U are the rank, the number of elements, and the maximum weight assigned to the elements of the matroid, respectively. (C) 1999 Academic Press.
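As a side note on the number theory involved: the Linnik constant L is the least exponent such that the smallest prime p ≡ a (mod q) is O(q^L). A tiny self-contained sketch (purely illustrative; not one of the paper's algorithms) computes that smallest prime for small moduli:

```python
from math import isqrt

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def least_prime_in_ap(a, q):
    """Smallest prime p with p ≡ a (mod q); assumes gcd(a, q) = 1,
    otherwise the progression may contain no primes at all."""
    p = a if a > 1 else a + q
    while not is_prime(p):
        p += q
    return p
```

For example, `least_prime_in_ap(1, 10)` returns 11; the quantity whose growth in q the Linnik constant governs.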
We describe randomized algorithms for computing the dominant eigenmodes of the generalized Hermitian eigenvalue problem Ax = λBx, with A Hermitian and B Hermitian and positive definite. The algorithms we describe only require forming the operations Ax, Bx, and B^{-1}x and avoid forming square roots of B (or operations of the form B^{1/2}x or B^{-1/2}x). We provide a convergence analysis and a posteriori error bounds, and derive some new results that provide insight into the accuracy of the eigenvalue calculations. The error analysis shows that the randomized algorithm is most accurate when the generalized singular values of B^{-1}A decay rapidly. A randomized algorithm for the generalized singular value decomposition is also provided. Finally, we demonstrate the performance of our algorithm on computing an approximation to the Karhunen-Loève expansion, which involves a computationally intensive generalized Hermitian eigenvalue problem with rapidly decaying eigenvalues. Copyright (c) 2015 John Wiley & Sons, Ltd.
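A minimal NumPy sketch of the core idea in this abstract, assuming a standard randomized subspace construction (the function name and details are illustrative, not the authors' code): sketch the range of B^{-1}A, B-orthonormalize it via a Cholesky factorization, and solve a small projected problem, so that no square root of B is ever formed:

```python
import numpy as np

def rand_gen_eigh(A, B, k, p=5, rng=None):
    """Approximate the k dominant eigenpairs of A v = theta * B v
    (A Hermitian, B Hermitian positive definite) using only products
    with A and B and solves with B -- no B^{1/2} or B^{-1/2}."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Randomized sketch of the range of B^{-1} A.
    Y = np.linalg.solve(B, A @ rng.standard_normal((n, k + p)))
    # B-orthonormalize: find Q with Q^H B Q = I via a Cholesky factor.
    L = np.linalg.cholesky(Y.conj().T @ B @ Y)
    Q = np.linalg.solve(L, Y.conj().T).conj().T   # Q = Y L^{-H}
    # With Q^H B Q = I, the projected pencil reduces to an ordinary
    # Hermitian eigenvalue problem of size (k + p).
    theta, W = np.linalg.eigh(Q.conj().T @ A @ Q)
    idx = np.argsort(theta)[::-1][:k]             # dominant modes
    return theta[idx], Q @ W[:, idx]
```

With B = I this reduces to a standard randomized Hermitian eigensolver; as the error analysis in the abstract indicates, accuracy is best when the spectrum of B^{-1}A decays quickly.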
Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for the computation of a low multilinear rank approximation of a tensor with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. Finally, we design an adaptive randomized algorithm for computing the tensor train approximation of a tensor. Based on bounds for the singular values of sub-Gaussian matrices with independent columns or independent rows, we analyze these randomized algorithms. We illustrate our adaptive randomized algorithms via several numerical examples.
Polynomial-time randomized algorithms were constructed to approximately solve optimal robust performance controller design problems in a probabilistic sense, and rigorous mathematical justification of the approach was given. The randomized algorithms here were based on a property from statistical learning theory known as (uniform) convergence of empirical means (UCEM). It is argued that, in order to assess the performance of a controller as the plant varies over a pre-specified family, it is better to use the average performance of the controller as the objective function to be optimized, rather than its worst-case performance. The efficiency of the approach is illustrated through an example.
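A toy illustration of the average-performance design principle described above (the plant family, cost function, and names are invented for illustration): sample plants at random and pick the controller gain whose empirical mean cost is smallest:

```python
import numpy as np

def pick_controller(gains, sample_plant, perf, n_samples=500, rng=None):
    """Choose the gain with the best *average* performance over
    randomly sampled plants, rather than worst-case performance."""
    rng = np.random.default_rng(rng)
    plants = [sample_plant(rng) for _ in range(n_samples)]
    means = [np.mean([perf(k, a) for a in plants]) for k in gains]
    return gains[int(np.argmin(means))], min(means)

# Toy family: scalar plant with uncertain parameter a in [0.5, 1.5];
# cost of gain k is a tracking mismatch |a - k| plus control effort.
gains = np.linspace(0.0, 2.0, 21)
k_best, _ = pick_controller(
    gains,
    sample_plant=lambda rng: rng.uniform(0.5, 1.5),
    perf=lambda k, a: abs(a - k) + 0.1 * k**2,
    rng=0,
)
```

The UCEM property cited in the abstract is what guarantees that such empirical means converge to the true average performance uniformly over the controller family.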
We show that the original classic randomized algorithms for approximate counting in NP-hard problems, such as counting the number of satisfying assignments in a SAT problem, counting the number of feasible colorings in a graph, and calculating the permanent, typically fail. They either do not converge at all or are heavily biased (converge to a local extremum). Exceptions are convex counting problems, like estimating the volume of a convex polytope. We also show how their performance can be dramatically improved by combining them with the classic splitting method, which is based on simulating multiple Markov chains simultaneously. We present several algorithms of the combined version, which we simply call the splitting algorithms. We show that the most advanced splitting version coincides with the cloning algorithm suggested earlier by the author. As compared to the randomized algorithms, the proposed splitting algorithms require very little warm-up time while running the MCMC from iteration to iteration, since the underlying Markov chains are already in steady state from the beginning. All that is required is fine-tuning, i.e., keeping the Markov chains in steady state while moving from iteration to iteration. We present extensive simulation studies with both the splitting and randomized algorithms for different NP-hard counting problems.
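A schematic sketch of the splitting idea, estimating a count as 2^n times a product of conditional level probabilities while moving a population of Markov chains between levels, on a toy problem with a known answer (illustrative only; this is not the author's cloning algorithm):

```python
import numpy as np

def splitting_count(n, levels, pop=2000, burn=10, rng=None):
    """Estimate |{x in {0,1}^n : sum(x) >= levels[-1]}| as 2^n times a
    product of conditional level probabilities.  At each level the
    elite samples are cloned back to full population size and moved
    with Metropolis bit-flips that respect the current level."""
    rng = np.random.default_rng(rng)
    X = rng.integers(0, 2, size=(pop, n))
    est = float(2 ** n)
    for lev in levels:
        ok = X.sum(axis=1) >= lev
        est *= ok.mean()                    # conditional probability
        elite = X[ok]
        # Clone elites to full population size, then let them mix.
        X = elite[rng.integers(0, len(elite), size=pop)].copy()
        for _ in range(burn):               # bit-flip moves on the level set
            i = rng.integers(0, n, size=pop)
            Y = X.copy()
            Y[np.arange(pop), i] ^= 1
            keep = Y.sum(axis=1) >= lev
            X[keep] = Y[keep]
    return est
```

Because the elites are already distributed on each level set, the chains start near steady state at every stage, which is the warm-up advantage the abstract describes.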
Many applications in data science and scientific computing involve large-scale datasets that are expensive to store and manipulate. However, these datasets possess inherent multidimensional structure that can be exploited to compress and store the dataset in an appropriate tensor format. In recent years, randomized matrix methods have been used to efficiently and accurately compute low-rank matrix decompositions. Motivated by this success, we develop randomized algorithms for tensor decompositions in the Tucker representation. Specifically, we present randomized versions of two well-known compression algorithms, namely, HOSVD and STHOSVD, and a detailed probabilistic analysis of the error in using both algorithms. We also develop variants of these algorithms that tackle specific challenges posed by large-scale datasets. The first variant adaptively finds a low-rank representation satisfying a given tolerance, and it is beneficial when the target rank is not known in advance. The second variant preserves the structure of the original tensor and is beneficial for large sparse tensors that are difficult to load in memory. We consider several different datasets for our numerical experiments: synthetic test tensors and realistic applications such as the compression of facial image samples in the Olivetti database and word counts in the Enron email dataset.
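A compact NumPy sketch of a randomized STHOSVD along the lines described above (illustrative; parameter choices and names are assumptions, not the paper's implementation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given full shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    shape = list(T.shape)
    shape[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, shape)

def rand_sthosvd(X, ranks, oversample=5, rng=None):
    """Sequentially truncated HOSVD with a randomized rangefinder in
    each mode; returns (core, factors) with X ~= core x_0 U0 x_1 U1 ..."""
    rng = np.random.default_rng(rng)
    G, factors = X, []
    for mode, r in enumerate(ranks):
        M = unfold(G, mode)
        # Sketch the range (column space) of the unfolding.
        Y = M @ rng.standard_normal((M.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Y)
        Uh, _, _ = np.linalg.svd(Q.T @ M, full_matrices=False)
        U = Q @ Uh[:, :r]
        factors.append(U)
        G = mode_product(G, U.T, mode)      # shrink this mode to rank r
    return G, factors
```

The adaptive and structure-preserving variants mentioned above would, respectively, replace the fixed `ranks` with a tolerance-driven rangefinder and avoid densifying sparse unfoldings.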
ISBN: 9781424438723 (print)
Randomized algorithms are a useful tool for analyzing the performance of complex uncertain systems. Their implementation requires the generation of a large number N of random samples representing the uncertainty scenarios, and the corresponding evaluation of system performance. When N is very large and/or performance evaluation is costly or time consuming, it can be necessary to distribute the computational burden of such algorithms among many cooperating computing units. This paper studies distributed versions of randomized algorithms for expected value and probability estimation over a network of computing nodes with possibly time-varying communication links. Explicit a priori bounds are provided for the sample and communication complexity of these algorithms in terms of the number of local samples, the number of computing nodes, and the number of communication iterations.
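A minimal sketch of two ingredients of this kind of scheme, a Hoeffding-type a priori sample bound and average consensus over a network of computing nodes (the ring network and weights here are invented for illustration, not the paper's bounds):

```python
import numpy as np
from math import ceil, log

def hoeffding_samples(eps, delta):
    """Samples per Hoeffding's inequality so that an empirical mean of
    [0, 1]-valued outcomes is within eps of the truth w.p. >= 1 - delta."""
    return ceil(log(2.0 / delta) / (2.0 * eps ** 2))

def distributed_estimate(local_means, W, iters):
    """Average consensus: each node repeatedly replaces its value with a
    weighted average of its neighbours' values.  If W is doubly
    stochastic, all nodes converge to the mean of the local Monte
    Carlo estimates."""
    x = np.asarray(local_means, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Example network: a 4-node ring with Metropolis weights.
W_ring = np.array([[1, 1, 0, 1],
                   [1, 1, 1, 0],
                   [0, 1, 1, 1],
                   [1, 0, 1, 1]]) / 3.0
```

Each node would draw its own local samples, form a local empirical mean, and then run the consensus iterations; the communication complexity is the number of such iterations needed for agreement.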
ISBN: 9781665444378 (print)
Singular value decomposition (SVD) is a key step in many algorithms in statistics, machine learning, and numerical linear algebra. While classical singular value decomposition has been made efficient in terms of computational complexity, classical algorithms are not able to fully utilise modern computing environments. The goal of this work is to survey various implementations and applications of randomized algorithms for SVD. Algorithms are compared in terms of accuracy and execution time. Using the example of robust principal component analysis (RPCA), it is shown that randomized algorithms can yield a significant speedup for image processing and similar applications.
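For reference, the basic randomized SVD that such surveys compare typically looks like the following NumPy sketch (a textbook version, not any specific implementation from this work):

```python
import numpy as np

def rsvd(A, k, oversample=10, n_iter=2, rng=None):
    """Basic randomized SVD in the style of Halko, Martinsson & Tropp:
    Gaussian sketch, a few power iterations, then a small exact SVD."""
    rng = np.random.default_rng(rng)
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    for _ in range(n_iter):          # power iterations sharpen the
        Y = A @ (A.T @ Y)            # captured subspace
    Q, _ = np.linalg.qr(Y)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]
```

The speedup over a full SVD comes from only ever decomposing the small (k + oversample)-row matrix Q.T @ A, which is what makes randomized SVD attractive inside RPCA-style iterations.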
ISBN: 9781424435548 (print)
Distributed Object Oriented (DOO) applications have been developed for solving complex problems in various scientific fields. One of the most important aspects of DOO systems is the efficient distribution of software classes among different nodes, in order to solve the mismatch problem that may appear when the software structure does not match the available hardware organization. We have proposed a multistep approach for restructuring DOO software. According to this approach, the OO system is partitioned into clusters that are then merged into larger groups forming what we call a Merged Cluster Graph. The last step in this approach is concerned with mapping these merged clusters onto the target distributed architecture. In general, the mapping problem is intractable, leaving room only for efficient heuristics. This paper presents two algorithms that solve the mapping problem using a randomized approach. The proposed algorithms have proved to be efficient, simple, and easy to understand and implement. Furthermore, the performance of the proposed algorithms was tested against some existing deterministic techniques. The experimental results showed an outstanding performance of the proposed algorithms in minimizing the overall mapping cost of the produced assignments.
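A schematic sketch of a randomized mapping heuristic in the spirit described above (the cost model and the balanced-assignment restriction are invented for illustration, not the paper's algorithms): random restarts plus cost-decreasing random swaps:

```python
import random

def mapping_cost(assign, comm, node_dist):
    """Total cost of placing clusters on nodes: for each cluster pair,
    communication volume times the distance between their nodes."""
    total = 0
    for i in range(len(assign)):
        for j in range(i + 1, len(assign)):
            total += comm[i][j] * node_dist[assign[i]][assign[j]]
    return total

def randomized_mapping(comm, node_dist, n_nodes, restarts=20, steps=200, seed=0):
    """Random-restart local search over balanced assignments: clusters
    are spread evenly over nodes, and random pairwise swaps that do
    not increase the cost are accepted."""
    rnd = random.Random(seed)
    n = len(comm)
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        assign = [c % n_nodes for c in range(n)]   # balanced start
        rnd.shuffle(assign)
        cost = mapping_cost(assign, comm, node_dist)
        for _ in range(steps):
            i, j = rnd.randrange(n), rnd.randrange(n)
            if assign[i] == assign[j]:
                continue
            assign[i], assign[j] = assign[j], assign[i]
            new_cost = mapping_cost(assign, comm, node_dist)
            if new_cost <= cost:
                cost = new_cost
            else:                                  # undo a bad swap
                assign[i], assign[j] = assign[j], assign[i]
        if cost < best_cost:
            best, best_cost = list(assign), cost
    return best, best_cost

# Example: clusters {0,1} and {2,3} communicate heavily; two nodes.
comm = [[0, 10, 1, 1], [10, 0, 1, 1], [1, 1, 0, 10], [1, 1, 10, 0]]
node_dist = [[0, 1], [1, 0]]
best, best_cost = randomized_mapping(comm, node_dist, n_nodes=2)
```

On this toy instance the heuristic places each heavily communicating pair on its own node, leaving only the light cross-pair traffic to pay for.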
ISBN: 9781728107080 (print)
Data-driven control strategies for dynamical systems with unknown parameters are popular in theory and applications. An essential problem is to prevent stochastic linear systems from becoming destabilized due to the decision-maker's uncertainty about the dynamical parameter. Two randomized algorithms have been proposed for this problem, but their performance has not been sufficiently investigated. Further, the effect of key parameters of the algorithms, such as the magnitude and frequency of the randomizations, has not been studied. This work studies the stabilization speed and the failure probability of data-driven procedures. We provide numerical analyses of the performance of two methods: stochastic feedback and stochastic parameter. The presented results imply that, as long as the number of statistically independent randomizations is not too small, fast stabilization is guaranteed.
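A toy scalar illustration of the stochastic-feedback idea, randomly perturbing a certainty-equivalence controller so the data stay exciting while an unknown parameter is estimated (all details, including the plant and the magnitude sigma of the randomization, are invented for illustration):

```python
import numpy as np

def stabilize(a_true, b, T=300, sigma=0.1, noise=0.05, seed=0):
    """Certainty-equivalence control of x+ = a x + b u + w with a
    unknown: the feedback is randomly perturbed ('stochastic feedback')
    to keep the regressor exciting for the least-squares estimate."""
    rng = np.random.default_rng(seed)
    x, a_hat = 1.0, 0.0
    sxx = sxy = 1e-6                 # running least-squares sums
    for _ in range(T):
        u = -(a_hat / b) * x + sigma * rng.standard_normal()
        x_next = a_true * x + b * u + noise * rng.standard_normal()
        sxx += x * x
        sxy += x * (x_next - b * u)  # regress x_next - b u on x
        a_hat = sxy / sxx
        x = x_next
    return x, a_hat
```

The magnitude sigma of the randomization plays exactly the role discussed above: too small and the estimate of a stalls; larger values speed identification at the cost of extra state variance.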