We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for "easy" Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^n/n. We get non-trivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which the lower bounds for the corresponding circuit class are known. These compression algorithms rely on the structural characterizations of "easy" functions, which are useful both for proving circuit lower bounds and for designing "meta-algorithms" (such as Circuit-SAT). For (de Morgan) formulas, such structural characterization is provided by the "shrinkage under random restrictions" results by Subbotovskaya (Doklady Akademii Nauk SSSR 136(3):553-555, 1961) and Håstad (SIAM J Comput 27:48-64, 1998), strengthened to the "high-probability" version by Santhanam (Proceedings of the Fifty-First Annual IEEE Symposium on Foundations of Computer Science, pp 183-192, 2010), Impagliazzo, Meka & Zuckerman (Proceedings of the Fifty-Third Annual IEEE Symposium on Foundations of Computer Science, pp 111-119, 2012b), and Komargodski & Raz (Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pp 171-180, 2013). We give a new, simple proof of the "high-probability" version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the result by Komargodski & Raz of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C. This complements Williams's result that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
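To make the compression task in the preceding abstract concrete, it can be written out as follows (notation ours, restating only what the abstract says):

```latex
% The compression task, restated (notation ours).
\textbf{Given:} the truth table $T_f \in \{0,1\}^{2^n}$ of
$f\colon \{0,1\}^n \to \{0,1\}$, promised to be computable by some unknown
small circuit from the class $\mathcal{C}$.
\textbf{Find,} in deterministic time $\mathrm{poly}(2^n)$, a circuit $C$ of
arbitrary type with
\[
  C(x) = f(x) \ \text{ for all } x \in \{0,1\}^n
  \qquad\text{and}\qquad
  \mathrm{size}(C) < \frac{2^n}{n},
\]
% i.e., beating the trivial bound: every n-variate Boolean function has
% circuits of size (1+o(1))\,2^n/n (Lupanov's construction).
```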
Many fields of computational science advance through improvements in the algorithms used for solving key problems. These advancements are often facilitated by benchmarks and competitions that enable performance comparisons and rankings of solvers. Simultaneously, meta-algorithmic techniques, such as automated algorithm selection and configuration, enable performance improvements by utilizing the complementary strengths of different algorithms or configurable algorithm components. In fact, meta-algorithms have become major drivers in advancing the state of the art in solving many prominent computational problems. However, meta-algorithmic techniques are complex and difficult to use correctly, while their incorrect use may reduce their efficiency, or in extreme cases, even lead to performance losses. Here, we introduce the Sparkle platform, which aims to make meta-algorithmic techniques more accessible to nonexpert users, and to make these techniques more broadly available in the context of competitions, to further enable the assessment and advancement of the true state of the art in solving challenging computational problems. To achieve this, Sparkle implements standard protocols for algorithm selection and configuration that support easy and correct use of these techniques. Following an experiment, Sparkle generates a report containing results, problem instances, algorithms, and other relevant information, for convenient use in scientific publications.
ISBN (print): 9781479936267
We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for "easy" Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^n/n. We get non-trivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which the lower bounds for the corresponding circuit class are known. These compression algorithms rely on the structural characterizations of "easy" functions, which are useful both for proving circuit lower bounds and for designing "meta-algorithms" (such as Circuit-SAT). For (de Morgan) formulas, such structural characterization is provided by the "shrinkage under random restrictions" results [52], [21], strengthened to the "high-probability" version by [48], [26], [33]. We give a new, simple proof of the "high-probability" version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [33] of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C. This complements Williams's result [55] that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
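For orientation, the "shrinkage under random restrictions" phenomenon invoked above is standardly stated as follows (our notation, not quoted from the paper). A p-random restriction ρ leaves each variable free independently with probability p and otherwise fixes it to 0 or 1 uniformly at random; writing L(f) for de Morgan formula size:

```latex
% Shrinkage under p-random restrictions (standard formulation, our notation).
% Hastad's shrinkage exponent for de Morgan formulas is Gamma = 2.
\[
  \mathbb{E}_{\rho}\!\left[\, L\!\left(f|_{\rho}\right) \,\right]
  \;\lesssim\; p^{\Gamma}\, L(f) \;+\; \text{lower-order terms},
  \qquad \Gamma = 2 .
\]
% The "high-probability" versions cited above ([48], [26], [33]) show that
% L(f restricted to rho) is small with probability close to 1, not just in
% expectation, which is what the compression and #SAT algorithms require.
```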
Due to the mainstream adoption of cloud computing and its rapidly increasing energy usage, the efficient management of cloud computing resources has become an important issue. A key challenge in managing these resources lies in the volatility of their demand. While a wide variety of online algorithms (e.g., Receding Horizon Control, Online Balanced Descent) have been designed, it is hard for cloud operators to pick the right one. In particular, these algorithms vary greatly in their use of predictions and in their performance guarantees. This paper studies an automatic algorithm selection scheme that operates in real time. To do this, we empirically study the prediction errors from real-world cloud computing traces. Results show that prediction errors differ across prediction algorithms, across virtual machines, and over the time horizon. Based on these observations, we propose a simple prediction error model and prove upper bounds on the dynamic regret of several online algorithms. We then apply the empirical and theoretical results to create a simple online meta-algorithm that chooses the best algorithm on the fly. Numerical simulations demonstrate that the performance of the designed policy is close to that of the best algorithm in hindsight.
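The abstract does not describe the meta-algorithm itself; the following Python sketch only illustrates the general idea of choosing among candidate online provisioning algorithms on the fly, using a standard Hedge-style (multiplicative-weights) selector. The candidate policies, cost model, and parameters are invented for illustration and are not the scheme proposed in the paper.

```python
"""Illustrative online meta-algorithm: Hedge-style selection among candidate
resource-provisioning policies. The policies, cost model, and parameters are
placeholders for illustration, not the paper's actual design."""

import math
import random


def cost(provisioned, demand, over_price=1.0, under_price=5.0):
    """Toy cost: pay for over-provisioning, pay more for unmet demand."""
    over = max(provisioned - demand, 0.0)
    under = max(demand - provisioned, 0.0)
    return over_price * over + under_price * under


# Candidate online policies: each maps the history of observed demands
# to the next provisioning decision.
def follow_last(history):
    return history[-1] if history else 1.0


def fixed_headroom(history, headroom=1.2):
    return headroom * (history[-1] if history else 1.0)


def moving_average(history, window=5):
    recent = history[-window:] if history else [1.0]
    return sum(recent) / len(recent)


def hedge_meta(policies, demands, eta=0.1):
    """Run all candidate policies in parallel; each step, pick one with
    probability proportional to exp(-eta * its cumulative cost so far)."""
    weights = [1.0] * len(policies)
    history, total_cost = [], 0.0
    for demand in demands:
        decisions = [p(history) for p in policies]
        total_w = sum(weights)
        probs = [w / total_w for w in weights]
        chosen = random.choices(range(len(policies)), weights=probs)[0]
        total_cost += cost(decisions[chosen], demand)
        # Full-information update: every policy's weight decays with its own cost.
        for i, d in enumerate(decisions):
            weights[i] *= math.exp(-eta * cost(d, demand))
        history.append(demand)
    return total_cost


if __name__ == "__main__":
    random.seed(0)
    # Toy demand trace with some volatility.
    trace = [10 + 3 * math.sin(t / 4) + random.uniform(-2, 2) for t in range(200)]
    print("meta-algorithm cost:", round(hedge_meta([follow_last, fixed_headroom, moving_average], trace), 1))
```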
ISBN (print): 9781450366786
Several different control methods are used in practice or have been proposed to cost-effectively provision IT resources. Since many control methods depend on accurate predictions of the future to make good provisioning decisions, there has been a great deal of literature on predicting workload demand. However, even with all of this literature on workload predictions and their use in control algorithms, the understanding of prediction error and how to handle it remains an important open issue and research problem. In this paper we aim to mend this gap by making the following contributions: (i) Prediction error is modeled to aid in proving worst-case dynamic regret bounds for control algorithms. (ii) Upper bounds on dynamic regret are proven for a variety of algorithms in terms of the prediction error model. In order to choose which algorithm to run without knowledge of the prediction error, a simple online meta-algorithm is designed. (iii) A detailed analysis of prediction accuracy is carried out for cloud computing by fitting real-world CPU utilization traces of Azure virtual machines to popular prediction models. (iv) Using real-world trace-based simulations of CPU allocation for virtual machines, the proposed meta-algorithm is shown to outperform a popular algorithm selection policy and to perform nearly as well as the best algorithm chosen in hindsight.
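The two abstracts above use "dynamic regret" without defining it; one standard definition (our notation, general background rather than text from either paper) is:

```latex
% Dynamic regret of decisions x_1, ..., x_T against per-round cost functions
% c_1, ..., c_T (one standard definition; notation ours).
\[
  \mathrm{Regret}_{d}(T)
  \;=\;
  \sum_{t=1}^{T} c_t(x_t)
  \;-\;
  \sum_{t=1}^{T} \min_{x \in \mathcal{X}} c_t(x) .
\]
% Unlike static regret, the comparator may change its decision every round,
% which is the natural benchmark for volatile, time-varying demand.
```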
The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. While there is a large body of literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem. This paper adapts concepts from statistical and online learning theory to reason about application-specific algorithm selection. Our models capture several state-of-the-art empirical and theoretical approaches to the problem, ranging from self-improving algorithms to empirical performance models, and our results identify conditions under which these approaches are guaranteed to perform well. We present one framework that models algorithm selection as a statistical learning problem, and our work here shows that dimension notions from statistical learning theory, historically used to measure the complexity of classes of binary- and real-valued functions, are relevant in a much broader algorithmic context. We also study the online version of the algorithm selection problem, and give possibility and impossibility results for the existence of no-regret learning algorithms.
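The abstract's phrase "algorithm selection as a statistical learning problem" can be made concrete as follows; this is our paraphrase of the usual PAC-style formulation, not text from the paper:

```latex
% Application-specific algorithm selection as statistical learning (our
% paraphrase): A is a class of algorithms, D an unknown distribution over
% instances, and cost(A, x) the performance of algorithm A on instance x.
\[
  \text{from samples } x_1,\dots,x_m \sim D, \ \text{output } \hat{A} \in \mathcal{A}
  \ \text{ with }\;
  \mathop{\mathbb{E}}_{x \sim D}\!\bigl[\mathrm{cost}(\hat{A},x)\bigr]
  \;\le\;
  \min_{A \in \mathcal{A}}
  \mathop{\mathbb{E}}_{x \sim D}\!\bigl[\mathrm{cost}(A,x)\bigr]
  + \varepsilon .
\]
% How many samples m suffice is governed by dimension notions (e.g., the
% pseudo-dimension) of the real-valued function class
% { x -> cost(A, x) : A in A }, which is how statistical learning theory enters.
```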
ISBN (print): 9781450340571
The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. While there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem. This paper adapts concepts from statistical and online learning theory to reason about application-specific algorithm selection. Our models capture several state-of-the-art empirical and theoretical approaches to the problem, ranging from self-improving algorithms to empirical performance models, and our results identify conditions under which these approaches are guaranteed to perform well. We present one framework that models algorithm selection as a statistical learning problem, and our work here shows that dimension notions from statistical learning theory, historically used to measure the complexity of classes of binary- and real-valued functions, are relevant in a much broader algorithmic context. We also study the online version of the algorithm selection problem, and give possibility and impossibility results for the existence of no-regret learning algorithms.
Business process management (BPM) enables the documentation and standardization of business processes, increasing efficiency and quality in their execution. A business process can be represented graphically using the Business Process Model and Notation (BPMN), an OMG (Object Management Group) standard for process modeling. BPMN provides an extensive set of modeling elements, such as activities, events and gateways, which enables the representation of a wide variety of business processes. It has high expressive power, capturing both temporal and logical relations between activities, data objects and resources. However, in the BPMN specification there is a lack of conformity between the conceptual definitions of the BPMN elements and their respective encoding (in XML format). For example, the XML representation of the Message Flow element does not express the rule that it may connect only elements from different pools, as stated in the conceptual element definition. The main goal of this work is to develop, for each notational element, a logic that expresses the rules described in its conceptual definition. In this work, meta-algorithm is the term used to refer to this logic. Software testing techniques, such as decision tables and graph coverage, were used to check the expressiveness of the proposed meta-algorithms. To assess user acceptance, a survey was conducted. As a result, the meta-algorithms were accepted by 73.33% of the participants. As its main contribution, this work provides a logic that adheres more closely to the conceptual element definitions, as well as evidence that users' understanding can improve, as verified in the survey. ...
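As a concrete illustration of the kind of rule such a meta-algorithm has to capture, the sketch below checks, over a simplified BPMN-like XML fragment, that every Message Flow connects elements from different pools. The element and attribute names are hypothetical simplifications that do not follow the full BPMN 2.0 XML schema, and the check is only an illustration, not the meta-algorithms proposed in the work.

```python
"""Illustrative check of the conceptual rule 'a Message Flow may only connect
elements from different pools' on a simplified BPMN-like XML fragment.
Element and attribute names are simplified for this sketch and do not follow
the full BPMN 2.0 XML schema."""

import xml.etree.ElementTree as ET

SAMPLE = """
<collaboration>
  <pool id="pool_customer">
    <task id="task_order"/>
  </pool>
  <pool id="pool_supplier">
    <task id="task_confirm"/>
  </pool>
  <messageFlow id="mf_1" sourceRef="task_order" targetRef="task_confirm"/>
  <messageFlow id="mf_2" sourceRef="task_order" targetRef="task_order"/>
</collaboration>
"""


def pool_of_each_element(root):
    """Map every element id nested inside a pool to that pool's id."""
    owner = {}
    for pool in root.iter("pool"):
        for elem in pool.iter():
            if "id" in elem.attrib:
                owner[elem.attrib["id"]] = pool.attrib["id"]
    return owner


def check_message_flows(xml_text):
    """Return the ids of message flows violating the different-pools rule."""
    root = ET.fromstring(xml_text)
    owner = pool_of_each_element(root)
    violations = []
    for mf in root.iter("messageFlow"):
        src_pool = owner.get(mf.attrib["sourceRef"])
        tgt_pool = owner.get(mf.attrib["targetRef"])
        if src_pool is None or tgt_pool is None or src_pool == tgt_pool:
            violations.append(mf.attrib["id"])
    return violations


if __name__ == "__main__":
    print("violations:", check_message_flows(SAMPLE))  # expect ['mf_2']
```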
We show that three subclasses of bounded treewidth graphs are well quasi ordered by refinements of the minor order. Specifically, we prove that graphs with bounded vertex cover are well quasi ordered by the induced subgraph order, graphs with bounded feedback vertex set are well quasi ordered by the topological-minor order, and graphs with bounded circumference are well quasi ordered by the induced minor order. Our results give algorithms for recognizing any graph family in these classes which is closed under the corresponding minor order refinement.
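For context, the following summarizes why well-quasi-ordering yields recognition algorithms (standard background, not a statement specific to this paper):

```latex
% Well-quasi-order (WQO), standard definition: a quasi-order (Q, \preceq) is
% a WQO if every infinite sequence q_1, q_2, ... has indices i < j with
% q_i \preceq q_j. If a class of graphs is WQO under \preceq and a family
% \mathcal{F} inside it is closed under taking \preceq-smaller graphs, then
% \mathcal{F} has a finite obstruction set:
\[
  G \in \mathcal{F}
  \;\iff\;
  H \not\preceq G \ \text{ for every } H \in \mathrm{Obs}(\mathcal{F}),
  \qquad |\mathrm{Obs}(\mathcal{F})| < \infty ,
\]
% so recognizing \mathcal{F} reduces to finitely many containment tests in
% the corresponding order (induced subgraph, topological minor, or induced
% minor, respectively).
```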