It was mentioned by Kolmogorov (1968, IEEE Trans. Inform. Theory 14, 662-664) that the properties of algorithmic complexity and Shannon entropy are similar. We investigate one aspect of this similarity. Namely, we are interested in linear inequalities that are valid for Shannon entropy and for Kolmogorov complexity. It turns out that (1) all linear inequalities that are valid for Kolmogorov complexity are also valid for Shannon entropy and vice versa; (2) all linear inequalities that are valid for Shannon entropy are valid for ranks of finite subsets of linear spaces; (3) the opposite statement is not true: Ingleton's inequality (1971, "Combinatorial Mathematics and Its Applications," pp. 149-167, Academic Press, San Diego) is valid for ranks but not for Shannon entropy; (4) for some special cases all three classes of inequalities coincide and have a simple description. We present an inequality for Kolmogorov complexity that implies Ingleton's inequality for ranks; another application of this inequality is a new simple proof of one of Gacs-Korner's results on common information (1973, Problems Control Inform. Theory 2, 149-162). (C) 2000 Academic Press.
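For reference, Ingleton's inequality for a rank function r on four subsets of a linear space can be written as:

```latex
% Ingleton's inequality for a rank function r and subsets A, B, C, D:
r(A) + r(B) + r(A \cup B \cup C) + r(A \cup B \cup D) + r(C \cup D)
  \leq r(A \cup B) + r(A \cup C) + r(A \cup D) + r(B \cup C) + r(B \cup D)
```

In entropy notation the same inequality reads I(A;B) ≤ I(A;B|C) + I(A;B|D) + I(C;D), and it is this form that fails for general Shannon entropies.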
We define the notion of rational presentation of a complete metric space, in order to study metric spaces from the algorithmic complexity point of view. In this setting, we study some representations of the space C[0, 1] of uniformly continuous real functions over [0, 1] with the usual norm ||f||_∞ = sup{ |f(x)| : 0 ≤ x ≤ 1 }. This allows us to make a global comparison between the complexity notions attached to these presentations. In particular, we get a generalization of Hoover's results concerning the Weierstrass approximation theorem in polynomial time. We also get a generalization of previous results on analytic functions which are computable in polynomial time. (C) 2001 Elsevier Science B.V. All rights reserved.
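As a small illustration (our own sketch, not from the paper), the sup norm above can be approximated from below by sampling f on a uniform rational grid:

```python
def sup_norm(f, n=10_000):
    """Approximate ||f||_inf = sup{|f(x)| : 0 <= x <= 1} on a uniform grid.

    A grid maximum only lower-bounds the true sup norm; for a uniformly
    continuous f the error vanishes as n grows.
    """
    return max(abs(f(i / n)) for i in range(n + 1))

# Example: f(x) = x*(1 - x) attains its maximum 0.25 at x = 1/2.
print(sup_norm(lambda x: x * (1 - x)))  # 0.25
```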
P4 is a widely used domain-specific language for programmable data planes. A critical step in P4 compilation is finding a feasible and efficient mapping of the high-level P4 source code constructs to the physical resources exposed by the underlying hardware, while meeting the data and control flow dependencies in the program. In this paper, we take a new look at the algorithmic aspects of this problem, with the motivation to understand the fundamental theoretical limits, obtain better P4 pipeline embeddings, and speed up practical P4 compilation times for RMT and dRMT target architectures. We report mixed results: we find that P4 compilation is computationally hard even in a severely relaxed formulation, and there is no polynomial-time approximation of arbitrary precision (unless P = NP), while the good news is that, despite its inherent complexity, P4 compilation is approximable in linear time with a small constant bound even for the most complex, nearly real-life models.
Network inference is a rapidly advancing field, with new methods being proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against gold-standard (true-value), purpose-designed synthetic networks and validated real-world biological networks. In this paper we aim to assess the impact of taking into consideration aspects of topological and information content in the evaluation of the final accuracy of an inference procedure. Specifically, we compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to perform better than random guesswork. Therefore special care should be taken to suit the method to the purpose at hand. Finally, we show that evaluations from data generated using different underlying topologies have different signatures that can be used to better choose a network reconstruction method. (C) 2016 Elsevier Ltd. All rights reserved.
The concept of complexity as considered in terms of its algorithmic definition proposed by G. J. Chaitin and A. N. Kolmogorov is revisited for the dynamical complexity of music. When music pieces are cast in the form of time series of pitch variations, concepts of dynamical systems theory can be used to define new quantities such as the dimensionality, as a measure of the global temporal dynamics of a music piece, and the Shannon entropy, as an evaluation of its local dynamics. When these quantities are computed explicitly for sequences sampled in the music literature from the 18th to the 20th century, no indication is found of a systematic increase in complexity paralleling historically the evolution of classical western music, but the analysis suggests that the fractional nature of art might have an intrinsic value of more general significance.
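As a hypothetical illustration of the local measure mentioned above (our own sketch, not the authors' code), the Shannon entropy of a sequence of pitch variations can be estimated from empirical symbol frequencies:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Empirical Shannon entropy (bits per symbol) of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Pitch variations: semitone intervals between successive notes of a
# hypothetical melody given as MIDI pitch numbers. A constant sequence
# has zero entropy; a more varied one has higher entropy.
pitches = [60, 62, 64, 65, 67, 65, 64, 62, 60]
intervals = [b - a for a, b in zip(pitches, pitches[1:])]
print(shannon_entropy(intervals))
```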
We derive complexity estimates for two classes of deterministic networks: the Boolean networks S(B_{n,m}), which compute the Boolean vector-functions B_{n,m}, and the classes of graphs G(V_{P_{m,l}}, E) with overlapping communities and high density. The latter objects are well suited for the synthesis of resilient networks. For the Boolean vector-functions, we propose a synthesis of networks on a NOT, AND, and OR logical basis and unreliable channels such that the computation of any Boolean vector-function is carried out with polynomial information cost. All vertices of the graphs G(V_{P_{m,l}}, E) are labeled by the trinomial (m^2 ± l, m)-partitions from the set of partitions P_{m,l}. It turns out that such labeling makes it possible to create networks of optimal algorithmic complexity with highly predictable parameters. Numerical simulations of simple graphs for trinomial (m^2 ± l, m)-partition families (m = 3, 4, ..., 9) allow for the exact estimation of all commonly known topological parameters of the graphs. In addition, a new topological parameter, the overlapping index, is proposed. The estimation of this index offers an explanation for the maximal density value for the clique graphs G(V_{P_{m,l}}, E).
We present a method for estimating the complexity of an image based on Bennett's concept of logical depth. Bennett identified logical depth as the appropriate measure of organized complexity, and hence as being better suited to the evaluation of the complexity of objects in the physical world. Its use results in a different, and in some sense finer, characterization than is obtained through the application of the concept of Kolmogorov complexity alone. We use this measure to classify images by their information content. The method provides a means for classifying and evaluating the complexity of objects by way of their visual representations. To the authors' knowledge, the method and application inspired by the concept of logical depth presented herein are being proposed and implemented for the first time. (C) 2011 Wiley Periodicals, Inc., Complexity, 2011.
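A common computable stand-in for these two quantities (our own hedged sketch; the paper's actual estimation procedure may differ) treats compressed size as a proxy for Kolmogorov complexity and the time needed to regenerate the object from its compressed description as a proxy for logical depth:

```python
import time
import zlib

def complexity_proxies(data: bytes):
    """Return (compressed size, decompression time in seconds).

    Compressed size is a standard computable upper-bound proxy for
    Kolmogorov complexity; decompression time of the shortest available
    description is a crude proxy for Bennett's logical depth.
    """
    packed = zlib.compress(data, level=9)
    t0 = time.perf_counter()
    zlib.decompress(packed)
    return len(packed), time.perf_counter() - t0

# A structureless (constant) "image" compresses far better than a more
# varied one, illustrating how the size proxy separates inputs.
uniform = b"\x00" * 16384
varied = bytes(range(256)) * 64
print(complexity_proxies(uniform)[0], complexity_proxies(varied)[0])
```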
Some Gödel centenary reflections on whether incompleteness is really serious, and whether mathematics should be done somewhat differently, based on using algorithmic complexity measured in bits of information.
The Lambek calculus with the unit can be defined as the atomic theory (algebraic logic) of the class of residuated monoids. This calculus, being a theory of a broader class of algebras than Heyting ones, is weaker than intuitionistic logic. Namely, it lacks the structural rules: permutation, contraction, and weakening. We consider two extensions of the Lambek calculus with modalities: the exponential, under which all structural rules are permitted, and the relevant modality, under which only the permutation and contraction rules are allowed. The Lambek calculus with a relevant modality is used in mathematical linguistics. Both calculi are algorithmically undecidable. We consider their fragments in which the modality may be applied only to formulas of Horn depth not greater than 1. We prove that these fragments are decidable and belong to the class NP. To show this, in the case of a relevant modality, we introduce a new notion of ℛ-total derivability in context-free grammars, i.e., the existence of a derivation in which each rule is used at least a given number of times. It is stated that the ℛ-totality problem is NP-hard for context-free grammars. We also pinpoint the algorithmic complexity of ℛ-total derivability for more general classes of generative grammars.
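To make the notion concrete, here is a hypothetical brute-force sketch (our own toy illustration, not from the paper) that checks ℛ-total derivability in a tiny context-free grammar by tracking how often each rule has been used along a leftmost derivation:

```python
def r_total_derivable(rules, start, word, minimum, max_steps=20):
    """Brute-force check of R-total derivability in a small CFG.

    rules:   list of (lhs_nonterminal, rhs_string) productions
    minimum: required minimal number of uses of each rule (by index)
    Searches leftmost derivations of `word` from `start` whose rule-use
    counts meet `minimum`, up to `max_steps` derivation steps.
    """
    frontier = [(start, tuple(0 for _ in rules))]
    for _ in range(max_steps):
        nxt = []
        for form, used in frontier:
            if form == word and all(u >= m for u, m in zip(used, minimum)):
                return True
            if len(form) > len(word) + 2:  # crude length-based pruning
                continue
            # expand the leftmost nonterminal (uppercase letters)
            i = next((k for k, ch in enumerate(form) if ch.isupper()), None)
            if i is None:
                continue
            for j, (lhs, rhs) in enumerate(rules):
                if lhs == form[i]:
                    counts = list(used)
                    counts[j] += 1
                    nxt.append((form[:i] + rhs + form[i + 1:], tuple(counts)))
        frontier = nxt
    return False

# S -> aS (rule 0), S -> b (rule 1): every derivation of "aab" uses
# rule 0 exactly twice, so requiring two uses succeeds and three fails.
rules = [("S", "aS"), ("S", "b")]
print(r_total_derivable(rules, "S", "aab", (2, 1)))  # True
print(r_total_derivable(rules, "S", "aab", (3, 1)))  # False
```

This exhaustive search is exponential; the NP-hardness result above says that a substantially better exact algorithm is unlikely.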
We present a general framework for analyzing the complexity of subdivision-based algorithms whose tests are based on the sizes of regions and their distance to certain sets (often varieties) intrinsic to the problem under study. We call such tests diameter-distance tests. We illustrate that diameter-distance tests are common in the literature by proving that many interval arithmetic-based tests are, in fact, diameter-distance tests. For this class of algorithms, we provide both non-adaptive bounds for the complexity, based on separation bounds, and adaptive bounds, obtained by applying the framework of continuous amortization. Using this structure, we provide the first complexity analysis for the algorithm by Plantinga and Vegter for approximating real implicit curves and surfaces. We present both adaptive and non-adaptive a priori worst-case bounds on the complexity of this algorithm, both in terms of the number of subregions constructed and in terms of the bit complexity of the construction. Finally, we construct families of hypersurfaces to prove that our bounds are tight. (C) 2019 Elsevier Ltd. All rights reserved.
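As an illustration of the idea (a hedged one-dimensional sketch under our own assumptions, not the Plantinga-Vegter algorithm itself), a diameter-distance style exclusion test discards a region when its diameter is small relative to the function value at its center:

```python
def isolate_roots(f, lipschitz, a, b, eps=1e-4):
    """Subdivision with a diameter-distance style exclusion test.

    A region [lo, hi] is discarded when |f(mid)| > L * (hi - lo) / 2:
    by the Lipschitz bound L, f then cannot vanish anywhere in the
    region, so its distance to the zero set exceeds its radius.
    Returns candidate intervals of width < eps that may contain roots,
    plus the number of regions examined (the subdivision complexity).
    """
    stack, candidates, examined = [(a, b)], [], 0
    while stack:
        lo, hi = stack.pop()
        examined += 1
        mid = (lo + hi) / 2
        if abs(f(mid)) > lipschitz * (hi - lo) / 2:
            continue                      # exclusion: no root possible here
        if hi - lo < eps:
            candidates.append((lo, hi))   # tiny undecided region: keep it
            continue
        stack += [(lo, mid), (mid, hi)]
    return candidates, examined

# f(x) = x^2 - 2 on [0, 2] (Lipschitz constant 4): the surviving tiny
# intervals cluster around the only root, sqrt(2).
cands, n = isolate_roots(lambda x: x * x - 2, 4.0, 0.0, 2.0)
print(any(lo <= 2 ** 0.5 <= hi for lo, hi in cands))  # True
```

The number of examined regions is exactly the quantity that the non-adaptive and adaptive bounds in the paper control.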