ISBN (print): 9781509025435
Determining the development priority of information system subsystems is a problem that warrants resolution during information system development. It has previously been proven that this problem of information system development order is in fact NP-complete, NP-hard, and APX-hard. To solve the problem in the general case, we previously developed a Monte Carlo randomized algorithm and calculated its complexity. Since that work, we have obtained digraphs that represent real-world information systems. In this paper we therefore empirically analyze the Monte Carlo algorithm to determine how it performs on real-world examples. We also critically review the results and outline possible areas of future research.
The Kaczmarz and Gauss-Seidel methods both solve a linear system X beta = y by iteratively refining the solution estimate. Recent interest in these methods has been sparked by a proof of Strohmer and Vershynin which shows that the randomized Kaczmarz method converges linearly in expectation to the solution. Lewis and Leventhal then proved a similar result for the randomized Gauss-Seidel algorithm. However, the behavior of both methods depends heavily on whether the system is underdetermined or overdetermined, and whether it is consistent or not. Here we provide a unified theory of both methods and their variants for these different settings, and draw connections between the two approaches. In doing so, we also prove that an extended version of randomized Gauss-Seidel converges linearly to the least-norm solution in the underdetermined case (where the usual randomized Gauss-Seidel fails to converge). We detail analytically and empirically the convergence properties of both methods and their extended variants in all possible system settings. With this result, a complete and rigorous theory of both methods is furnished.
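The randomized Kaczmarz update this abstract refers to is simple enough to sketch. The following is an illustrative Python implementation of the Strohmer-Vershynin row-sampling scheme on a consistent system (problem sizes, iteration count, and function names are our own choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_kaczmarz(X, y, iters=2000):
    """Randomized Kaczmarz for a consistent system X beta = y.
    Rows are sampled with probability proportional to their squared
    norm, as in Strohmer and Vershynin's analysis."""
    m, n = X.shape
    row_norms = np.sum(X**2, axis=1)
    probs = row_norms / row_norms.sum()
    beta = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane <x_i, beta> = y_i.
        beta += (y[i] - X[i] @ beta) / row_norms[i] * X[i]
    return beta

# Consistent overdetermined system: y lies in the range of X.
X = rng.standard_normal((50, 5))
beta_true = rng.standard_normal(5)
y = X @ beta_true
beta_hat = randomized_kaczmarz(X, y)
err = float(np.linalg.norm(beta_hat - beta_true))
```

On a well-conditioned Gaussian system like this, the linear convergence rate makes 2000 projections more than enough to reach machine precision; on an inconsistent system this plain variant stalls, which is exactly the setting the extended variants in the paper address.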
We show that the original classic randomized algorithms for approximate counting in NP-hard problems, such as counting the number of satisfying assignments of a SAT instance, counting the number of feasible colorings in a graph, and calculating the permanent, typically fail: they either do not converge at all or are heavily biased (converge to a local extremum). Exceptions are convex counting problems, such as estimating the volume of a convex polytope. We also show how their performance can be dramatically improved by combining them with the classic splitting method, which is based on simulating multiple Markov chains simultaneously. We present several algorithms of the combined version, which we simply call splitting algorithms. We show that the most advanced splitting version coincides with the cloning algorithm suggested earlier by the author. Compared to the randomized algorithms, the proposed splitting algorithms require very little warm-up time when running the MCMC from iteration to iteration, since the underlying Markov chains are already in steady state from the beginning. All that is required is fine-tuning, i.e., keeping the Markov chains in steady state while moving from iteration to iteration. We present extensive simulation studies with both the splitting and randomized algorithms for different NP-hard counting problems.
In this note we compare the randomized extended Kaczmarz (EK) algorithm and randomized coordinate descent (CD) for solving the full-rank overdetermined linear least-squares problem, and prove that CD needs fewer operations to satisfy the same residual-related termination criteria. For general least-squares problems, we show that first running CD to compute the residual and then running standard Kaczmarz on the resulting consistent system is more efficient than EK.
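The randomized coordinate descent (randomized Gauss-Seidel) iteration being compared here can be sketched as follows; this is an illustrative implementation that maintains the residual explicitly, with sizes and iteration counts chosen by us, not by the note:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_cd_lsq(X, y, iters=3000):
    """Randomized coordinate descent on 0.5 * ||X beta - y||^2.
    Columns are sampled proportionally to their squared norm; each step
    exactly minimizes the objective in the chosen coordinate."""
    m, n = X.shape
    col_norms = np.sum(X**2, axis=0)
    probs = col_norms / col_norms.sum()
    beta = np.zeros(n)
    r = y.copy()                       # residual y - X beta
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        step = X[:, j] @ r / col_norms[j]
        beta[j] += step
        r -= step * X[:, j]            # keep the residual up to date
    return beta

X = rng.standard_normal((60, 4))
y = rng.standard_normal(60)            # generally inconsistent system
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = randomized_cd_lsq(X, y)
err = float(np.linalg.norm(beta_hat - beta_ls))
```

Unlike plain Kaczmarz, CD converges to the least-squares solution even on an inconsistent system, and the maintained residual `r` is exactly the quantity the note proposes feeding to a subsequent Kaczmarz phase.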
We analyze the general version of the classic guessing game Mastermind with n positions and k colors. Since the case k <= n^(1-epsilon), epsilon > 0 a constant, is well understood, we concentrate on larger numbers of colors. For the most prominent case k = n, our results imply that Codebreaker can find the secret code with O(n log log n) guesses. This bound is valid also when only black answer pegs are used. It improves the O(n log n) bound first proven by Chvatal. We also show that if both black and white answer pegs are used, then the O(n log log n) bound holds for up to n^2 log log n colors. These bounds are almost tight, as the known lower bound of Omega(n) shows. Unlike for k <= n^(1-epsilon), simply guessing at random until the secret code is determined is not sufficient. In fact, we show that an optimal nonadaptive strategy (deterministic or randomized) needs Theta(n log n) guesses.
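The "guess at random until the secret code is determined" baseline from the abstract is easy to make concrete for tiny instances. The sketch below (our own brute-force illustration, feasible only for very small n and k) uses black-peg feedback and stops once exactly one code is consistent with the feedback history:

```python
import itertools
import random

random.seed(2)

def black_pegs(secret, guess):
    """Black answer pegs: number of positions where the guess matches."""
    return sum(s == g for s, g in zip(secret, guess))

def random_guessing(secret, n, k):
    """Guess uniformly at random until the black-peg feedback history is
    consistent with exactly one code.  Enumerates all k^n codes, so this
    is an illustration for tiny parameters only."""
    candidates = list(itertools.product(range(k), repeat=n))
    guesses = 0
    while len(candidates) > 1:
        g = tuple(random.randrange(k) for _ in range(n))
        b = black_pegs(secret, g)
        # Keep only codes that would have produced the same feedback.
        candidates = [c for c in candidates if black_pegs(c, g) == b]
        guesses += 1
    return candidates[0], guesses

found, used = random_guessing((0, 1, 2, 3), n=4, k=4)
```

The secret always survives the filtering, so the loop ends with the correct code; the abstract's point is that for k close to n this nonadaptive strategy needs Theta(n log n) guesses, while adaptive strategies get by with O(n log log n).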
We present a new randomized diffusion-based algorithm for balancing indivisible tasks (tokens) on a network. Our aim is to minimize the discrepancy between the maximum and minimum load. The algorithm works as follows. Every vertex distributes its tokens as evenly as possible among its neighbors and itself. If this is not possible without splitting some tokens, the vertex redistributes its excess tokens among all its neighbors randomly (without replacement). In this paper we prove several upper bounds on the load discrepancy for general networks. These bounds depend on some expansion properties of the network, that is, the second largest eigenvalue, and a novel measure which we refer to as refined local divergence. We then apply these general bounds to obtain results for some specific networks. For constant-degree expanders and torus graphs, these yield exponential improvements on the discrepancy bounds. For hypercubes we obtain a polynomial improvement. (c) 2014 Elsevier Inc. All rights reserved.
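The round described in the abstract can be sketched directly; the following is our own minimal illustration of one such round (graph, token counts, and round count are arbitrary choices, not the paper's experiments):

```python
import random

random.seed(3)

def diffusion_round(load, neighbors):
    """One round of the randomized diffusion scheme: each vertex splits
    its tokens as evenly as possible among its neighbors and itself,
    then sends the indivisible excess tokens to distinct neighbors
    chosen at random (without replacement)."""
    new_load = {v: 0 for v in load}
    for v, tokens in load.items():
        q, r = divmod(tokens, len(neighbors[v]) + 1)
        new_load[v] += q
        for u in neighbors[v]:
            new_load[u] += q
        # r < deg(v) + 1, so sampling r distinct neighbors is valid.
        for u in random.sample(neighbors[v], r):
            new_load[u] += 1
    return new_load

# Cycle on 8 vertices, all 80 tokens starting at vertex 0.
n = 8
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
load = {v: 0 for v in range(n)}
load[0] = 80
for _ in range(50):
    load = diffusion_round(load, neighbors)
discrepancy = max(load.values()) - min(load.values())
```

Tokens are conserved in every round, and on this small cycle the discrepancy shrinks from 80 toward the small additive bounds the paper proves for torus-like graphs.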
ISBN (print): 9780769544120
Using Jerabek's framework for probabilistic reasoning, we formalize the correctness of two fundamental RNC^2 algorithms for bipartite perfect matching within the theory VPV for polytime reasoning. The first algorithm is for testing if a bipartite graph has a perfect matching, and is based on the Schwartz-Zippel Lemma for polynomial identity testing applied to the Edmonds polynomial of the graph. The second algorithm, due to Mulmuley, Vazirani and Vazirani, is for finding a perfect matching, where the key ingredient of this algorithm is the Isolating Lemma.
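The first algorithm mentioned, Schwartz-Zippel testing on the Edmonds matrix, can be sketched concretely. The following is our own illustration (the prime, trial count, and helper names are our choices): substitute random field elements for the variables of the Edmonds matrix and test whether the determinant is nonzero modulo a large prime.

```python
import random

random.seed(4)

def det_mod_p(M, p):
    """Determinant of a square integer matrix mod prime p,
    via Gaussian elimination with Fermat inverses."""
    n = len(M)
    M = [row[:] for row in M]
    det = 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)        # modular inverse, p prime
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % p
    return det % p

P = 2_147_483_647  # a large prime field

def maybe_perfect_matching(edges, n, trials=3):
    """One-sided randomized test: entry (i, j) of the Edmonds matrix is a
    fresh random field element if edge (i, j) exists, else 0.  A nonzero
    determinant certifies a perfect matching; by Schwartz-Zippel a false
    negative occurs with probability at most n/P per trial."""
    for _ in range(trials):
        M = [[random.randrange(1, P) if (i, j) in edges else 0
              for j in range(n)] for i in range(n)]
        if det_mod_p(M, P) != 0:
            return True
    return False

has_pm = maybe_perfect_matching({(0, 0), (0, 1), (1, 1), (2, 2)}, 3)
no_pm = maybe_perfect_matching({(0, 0), (1, 0), (2, 0)}, 3)
```

The RNC^2 bound in the abstract comes from evaluating this determinant with a parallel algorithm rather than sequential elimination; the randomized substitution step is the same.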
We study the numerical integration problem for functions with infinitely many variables. The function spaces of integrands we consider are weighted reproducing kernel Hilbert spaces with norms related to the ANOVA decomposition of the integrands. The weights model the relative importance of different groups of variables. We investigate randomized quadrature algorithms and measure their quality by estimating the randomized worst-case integration error. In particular, we provide lower error bounds for a very general class of randomized algorithms that includes non-linear and adaptive algorithms. Furthermore, we propose new randomized changing dimension algorithms (also called multivariate decomposition methods) and present favorable upper error bounds. For product weights and finite-intersection weights our lower and upper error bounds match and show that our changing dimension algorithms are optimal in the sense that they achieve convergence rates arbitrarily close to the best possible convergence rate. As more specific examples, we discuss unanchored Sobolev spaces of different degrees of smoothness and randomized changing dimension algorithms that use as building blocks interlaced scrambled polynomial lattice rules. Crown Copyright (C) 2014 Published by Elsevier Inc. All rights reserved.
The random priority (RP) mechanism is a popular way to allocate n objects to n agents with strict ordinal preferences over the objects. In the RP mechanism, an ordering over the agents is selected uniformly at random; the first agent is then allocated his most-preferred object, the second agent is allocated his most-preferred object among the remaining ones, and so on. The outcome of the mechanism is a bi-stochastic matrix in which entry (i, a) represents the probability that agent i is given object a. It is shown that the problem of computing the RP allocation matrix is #P-complete. Furthermore, it is NP-complete to decide if a given agent i receives a given object a with positive probability under the RP mechanism, whereas it is possible to decide in polynomial time whether or not agent i receives object a with probability 1. The implications of these results for approximating the RP allocation matrix as well as on finding constrained Pareto optimal matchings are discussed.
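The mechanism itself is straightforward to simulate; what the abstract shows is that computing the resulting matrix exactly is hard. As a small illustration (our own code, enumerating all n! orderings, which the #P-completeness result suggests cannot be avoided in general):

```python
import itertools
from fractions import Fraction

def rp_allocation_matrix(prefs):
    """Exact RP allocation matrix by enumerating all n! priority orders.
    prefs[i] lists agent i's objects from most- to least-preferred.
    Entry P[i][a] is the probability that agent i receives object a."""
    n = len(prefs)
    P = [[Fraction(0)] * n for _ in range(n)]
    orders = list(itertools.permutations(range(n)))
    for order in orders:
        taken = set()
        for agent in order:
            # Agent takes the best still-available object.
            obj = next(o for o in prefs[agent] if o not in taken)
            taken.add(obj)
            P[agent][obj] += Fraction(1, len(orders))
    return P

# Three agents with identical preferences 0 > 1 > 2: by symmetry every
# agent receives each object with probability exactly 1/3.
P = rp_allocation_matrix([[0, 1, 2], [0, 1, 2], [0, 1, 2]])
```

Exact fractions make the bi-stochastic structure visible: every row and column sums to 1. For large n one falls back on sampling random orderings, which is precisely the approximation question the paper's hardness results inform.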
ISBN (print): 9781479978151
An efficient and robust FETI-2 lambda domain decomposition method (DDM) framework for electromagnetic (EM) modeling is outlined. The proposed framework uses randomized algorithms to approximate the low-rank discrete Dirichlet-to-Neumann (DtN) map interactions that arise in FETI-2 lambda. The resulting approach is also combined with effective and robust local and global DDM preconditioners. A realistic numerical example is given to verify the effectiveness, efficiency, and robustness of the approach.
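The abstract does not detail which randomized low-rank scheme is used, but a standard choice for approximating a low-rank operator is the randomized range finder in the Halko-Martinsson-Tropp style. The sketch below is our own generic illustration of that technique, not the paper's FETI-2 lambda machinery:

```python
import numpy as np

rng = np.random.default_rng(5)

def randomized_lowrank(A, rank, oversample=10):
    """Randomized range finder: sketch the range of A with a Gaussian
    test matrix, orthonormalize, and compress.  Returns Q, B with
    A ~= Q @ B, where Q has rank + oversample orthonormal columns."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sketch
    B = Q.T @ A                      # small factor: (rank+oversample) x n
    return Q, B

# Exactly rank-8 test matrix: the approximation is exact up to rounding.
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 150))
Q, B = randomized_lowrank(A, rank=8)
rel_err = float(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))
```

The appeal in a DDM setting is that the operator only needs to be applied to a handful of random vectors, never formed densely, which is what makes such approximations cheap for discrete DtN maps.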