We recall that optimal condensing of nearest neighbor data requires the construction of the Delaunay triangulation of the training set. We argue that, from the viewpoint of computational complexity, an iterative approach using a dynamic triangulation is the most desirable. We describe two algorithms, INSERT and DELETE, which make it possible to maintain a dynamic Delaunay triangulation.
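As a rough illustration of the static form of this idea, the sketch below (ours, not the paper's INSERT/DELETE routines) uses scipy.spatial.Delaunay to retain only those training points with at least one Delaunay neighbor of a different class, the decision-boundary-consistent condensing step that the dynamic triangulation is meant to support; the function name delaunay_condense and the toy data are assumptions for illustration.

```python
# A minimal sketch of Delaunay-based condensing of a nearest-neighbor training
# set: a point is retained iff at least one of its Delaunay neighbors carries a
# different class label. This is the static decision-boundary-consistent step;
# it does NOT implement the paper's dynamic INSERT/DELETE maintenance.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_condense(points, labels):
    """Return indices of points whose Delaunay neighborhood is not label-pure."""
    tri = Delaunay(points)
    indptr, neighbors = tri.vertex_neighbor_vertices
    keep = []
    for i in range(len(points)):
        neigh = neighbors[indptr[i]:indptr[i + 1]]
        if np.any(labels[neigh] != labels[i]):
            keep.append(i)
    return np.array(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy two-class problem
    kept = delaunay_condense(X, y)
    print(f"kept {len(kept)} of {len(X)} training points")
```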
In this work, we consider a graded mesh refinement algorithm for solving time-delayed parabolic partial differential equations with a small diffusion parameter. The presence of this parameter leads to boundary layer phenomena; such problems are known as singularly perturbed problems. For these problems, it is well known that a convergent solution that resolves the boundary-layer dynamics cannot be obtained on a fixed number of uniform mesh points, irrespective of the magnitude of the perturbation parameter. Here, we consider an adaptive graded mesh generation algorithm, based on an entropy function used in conjunction with classical difference schemes, to resolve the layer behavior. The advantage of the present algorithm is that it does not require any information about the location of the layer. Several examples are presented to show the high performance of the proposed algorithm.
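The paper's refinement is driven by an entropy function; as a stand-in, the sketch below generates a graded mesh by equidistributing a generic arc-length-type monitor of a coarse solution profile. The monitor choice, the function equidistribute, and the toy layer profile are our assumptions, not the authors' algorithm.

```python
# A minimal sketch of adaptive graded-mesh generation by equidistributing a
# monitor function of a coarse solution. The paper drives refinement with an
# entropy-based monitor; an arc-length monitor is used here purely as a stand-in.
import numpy as np

def equidistribute(x, u, n_new):
    """Return a graded mesh of n_new+1 points equidistributing an arc-length monitor."""
    dudx = np.gradient(u, x)
    monitor = np.sqrt(1.0 + dudx**2)
    # cumulative "mass" of the monitor along the interval
    cell = 0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x)
    cum = np.concatenate(([0.0], np.cumsum(cell)))
    targets = np.linspace(0.0, cum[-1], n_new + 1)
    return np.interp(targets, cum, x)  # invert the cumulative map

if __name__ == "__main__":
    eps = 1e-3                               # small diffusion parameter
    x = np.linspace(0.0, 1.0, 41)            # coarse uniform mesh
    u = 1.0 - np.exp(-x / np.sqrt(eps))      # boundary-layer-like profile
    x_new = equidistribute(x, u, 40)
    print("smallest cell near the layer:", np.min(np.diff(x_new)))
```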
Synthetic pattern generation procedures have various applications, and a number of approaches (fractals, L-systems, etc.) have been devised. A fundamental underlying question is: will new pattern generation algorithms continue to be invented, or is there some "universal" algorithm that can generate all (and only) the perceptually distinguishable images, or even all members of a restricted class of patterns such as logos or letterforms? In fact there are many complete algorithms that can generate all possible images, but most images are random and not perceptually distinguishable. Counting arguments show that the percentage of distinguishable images that will be generated by such complete algorithms is vanishingly small. In this paper we observe that perceptually distinguishable images are compressible. Using this observation it is evident that algorithmic complexity provides an appropriate framework for discussing the question of a universal image generator. We propose a natural thesis for describing perceptually distinguishable images and argue its validity. Based on it, we show that there is no program that generates all (and only) these images. Although this is an abstract result, it may have importance for graphics and other fields that deal with compressible signals. In essence, new representations and pattern generation algorithms will continue to be developed; there is no feasible "super algorithm" that is capable of all things. (C) 2011 Elsevier Inc. All rights reserved.
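The compressibility observation can be made concrete with a crude proxy for algorithmic complexity: the length of a losslessly compressed bitmap. The sketch below (our own construction, using zlib as the compressor) contrasts a structured pattern with uniform pixel noise.

```python
# A crude illustration of the compressibility argument: structured (perceptually
# distinguishable) images compress well under a lossless coder, while typical
# random images do not. Compressed length serves as a rough, upper-bound proxy
# for algorithmic (Kolmogorov) complexity.
import zlib
import numpy as np

def compressed_ratio(img_bytes):
    return len(zlib.compress(img_bytes, 9)) / len(img_bytes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 256
    # structured pattern: a simple checkerboard
    yy, xx = np.indices((n, n))
    checker = ((((xx // 16) + (yy // 16)) % 2) * 255).astype(np.uint8)
    # random pattern: i.i.d. uniform pixels
    noise = rng.integers(0, 256, size=(n, n), dtype=np.uint8)
    print("checkerboard ratio:", round(compressed_ratio(checker.tobytes()), 3))
    print("noise ratio:       ", round(compressed_ratio(noise.tobytes()), 3))
```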
The stock market is a canonical example of a complex system, in which a large number of interacting agents drives the joint evolution of stock returns and the collective market behavior exhibits emergent properties. However, quantifying complexity in stock market data is a challenging task. In this report, we explore four different measures for characterizing the intrinsic complexity by evaluating the structural relationships between stock returns. The first two measures are based on linear and non-linear co-movement structures (accounting for contemporaneous and Granger causal relationships), the third is based on algorithmic complexity, and the fourth is based on spectral analysis of interacting dynamical systems. Our analysis of a dataset comprising daily prices of a large number of stocks in the complete historical data of NASDAQ (1972-2018) shows that the third and fourth measures identify the major global economic downturn of 2007-09 and its associated spillovers substantially more accurately than the first two measures. We conclude this report with a discussion of the implications of such quantification methods for risk management in complex systems.
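As a toy illustration of the algorithmic-complexity flavor of such measures (not the authors' estimator), the sketch below discretizes daily returns into up/down symbols and scores a panel of stocks by the compressed length of the pooled symbol stream; the factor-model toy data and the thresholding are assumptions of ours.

```python
# A toy complexity score for a panel of stock returns, loosely in the spirit of
# the report's algorithmic-complexity measure (not the authors' estimator):
# returns are discretized into up/down symbols and the compressed length of the
# pooled symbol stream is used as the score. Lower ratios suggest more shared
# structure (co-movement) across stocks.
import zlib
import numpy as np

def complexity_score(returns):
    """returns: (n_days, n_stocks) array of daily returns."""
    symbols = (returns > 0).astype(np.uint8)        # 1 = up day, 0 = down/flat
    stream = symbols.T.tobytes()                    # concatenate per-stock streams
    return len(zlib.compress(stream, 9)) / len(stream)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    common = rng.normal(size=(500, 1))              # a shared market factor
    idiosyncratic = rng.normal(size=(500, 20))
    coupled = 0.9 * common + 0.1 * idiosyncratic    # strongly co-moving market
    independent = idiosyncratic                     # no shared structure
    print("coupled market:    ", round(complexity_score(coupled), 3))
    print("independent market:", round(complexity_score(independent), 3))
```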
The aim of this paper is to undertake an experimental investigation of the trade-offs between program size and time computational complexity. The investigation includes an exhaustive exploration and systematic study of the functions computed by the set of all 2-color Turing machines with 2 states (written (2,2)) and 3 states (written (3,2)), with particular attention to the runtimes, space usages and patterns corresponding to the computed functions when the machines have access to larger resources (more states). We report that the average runtime of Turing machines computing a function increases, with probability close to one, with the number of states, indicating that machines not terminating (almost) immediately tend to occupy all the resources at hand. We calculated all time complexity classes to which the algorithms computing the functions found in both (2,2) and (3,2) belong, and compared these classes. For a selection of functions the comparison is extended to (4,2). Our study revealed various structures in the micro-cosmos of small Turing machines. Most notably, we observed "phase transitions" in the halting-probability distribution. Moreover, we observed that small initial segments fully define a function computed by a TM.
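A minimal simulator for k-state, 2-color Turing machines makes the (2,2) and (3,2) experiments concrete. The sketch below runs a single machine on a blank tape with a step cap and reports its runtime and space usage; the transition-table encoding and the example machine are ours and need not match the enumeration used in the paper.

```python
# A minimal simulator for k-state, 2-color Turing machines on a two-way-infinite
# blank tape, recording runtime (steps) and space usage (cells visited). The
# transition-table encoding is ours; the paper's enumeration may differ.
from collections import defaultdict

HALT = -1

def run(delta, max_steps=10_000):
    """delta: dict (state, symbol) -> (new_symbol, move in {-1, +1}, new_state)."""
    tape = defaultdict(int)
    state, head, visited = 0, 0, {0}
    for step in range(1, max_steps + 1):
        symbol, move, state = delta[(state, tape[head])]
        tape[head] = symbol
        head += move
        visited.add(head)
        if state == HALT:
            return step, len(visited)          # runtime, space usage
    return None, len(visited)                  # did not halt within the cap

if __name__ == "__main__":
    # A toy (2,2) machine: writes two 1s and halts after three steps.
    machine = {
        (0, 0): (1, +1, 1),
        (0, 1): (1, -1, HALT),
        (1, 0): (1, -1, 0),
        (1, 1): (1, +1, HALT),
    }
    print(run(machine))
```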
A major problem for evolutionary theory is understanding the so-called open-ended nature of evolutionary change, from its definition to its origins. Open-ended evolution (OEE) refers to the unbounded increase in complexity that seems to characterize evolution on multiple scales. This property seems to be a characteristic feature of biological and technological evolution and is strongly tied to the generative potential associated with combinatorics, which allows such systems to grow and expand their available state spaces. Interestingly, many complex systems presumably displaying OEE, from language to proteins, share a common statistical property: the presence of Zipf's Law. Given an inventory of basic items (such as words or protein domains) required to build more complex structures (sentences or proteins), Zipf's Law tells us that most of these elements are rare whereas a few of them are extremely common. Using algorithmic information theory, in this paper we provide a fundamental definition of open-endedness, formulated as a set of postulates. Its statistical counterpart, based on standard Shannon information theory, has the structure of a variational problem which is shown to lead to Zipf's Law as the expected consequence of an evolutionary process displaying OEE. We further explore the problem of information conservation through an OEE process and conclude that statistical information (standard Shannon information) is not conserved, resulting in the paradoxical situation in which the increase of information content has the effect of erasing itself. We prove that this paradox is resolved if we consider non-statistical forms of information. This last result implies that standard information theory may not be a suitable theoretical framework to explore the persistence and increase of the information content in OEE systems.
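The statistical signature discussed here is easy to check empirically: rank the inventory items by frequency and estimate the slope of the rank-frequency relation on log-log axes, with a slope near -1 corresponding to the classical Zipf regime. The sketch below does this for a synthetic corpus; it is a diagnostic of ours, not the paper's variational machinery.

```python
# A quick empirical Zipf check: rank items (here, words) by frequency and fit
# the slope of log(frequency) against log(rank). A slope near -1 corresponds to
# the classical Zipf's Law regime discussed in the paper.
import numpy as np
from collections import Counter

def zipf_slope(tokens):
    counts = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(counts) + 1, dtype=float)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # synthetic corpus whose word frequencies follow a Zipf-like law
    vocab = [f"w{i}" for i in range(1000)]
    probs = 1.0 / np.arange(1, 1001)
    probs /= probs.sum()
    corpus = rng.choice(vocab, size=50_000, p=probs)
    print("estimated rank-frequency slope:", round(zipf_slope(corpus), 2))
```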
ISBN (print): 9781450355834
We present a new quasi-polynomial algorithm for solving parity games. It is based on a new bisimulation-invariant measure of complexity for parity games, called the register-index, which captures the complexity of the priority assignment. For fixed parameter k, the class of games with register-index bounded by k is solvable in polynomial time. We show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm. Finally, we give a descriptive complexity account of the quasi-polynomial complexity of parity games: the winning regions of parity games with p priorities and register-index k are described by a modal mu-calculus formula whose complexity, as measured by its alternation depth, depends on k rather than on p.
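The register-index construction itself is too involved for a short snippet, but the basic building block of most parity-game solvers, the attractor computation (the vertices from which a player can force the play into a target set), is easy to sketch; the graph representation below is an assumption of ours.

```python
# A sketch of the attractor computation, a standard building block of parity-game
# solvers (the register-index algorithm itself is not reproduced here).
# attractor(succ, owner, p, T) is the set of vertices from which player p can
# force the play to reach the target set T.
def attractor(succ, owner, player, target):
    """succ: dict vertex -> list of successors; owner: dict vertex -> 0 or 1."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in succ:
            if v in attr:
                continue
            if owner[v] == player:
                wins = any(w in attr for w in succ[v])   # one good move suffices
            else:
                wins = all(w in attr for w in succ[v])   # opponent cannot escape
            if wins:
                attr.add(v)
                changed = True
    return attr

if __name__ == "__main__":
    succ = {0: [1], 1: [0, 2], 2: [2], 3: [0, 2]}
    owner = {0: 0, 1: 1, 2: 0, 3: 0}
    print(attractor(succ, owner, player=0, target={2}))   # expected: {2, 3}
```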
We prove that a computable ordinal α is autostable relative to strong constructivizations if and only if α ω+1. We obtain an estimate of the algorithmic complexity for the class of strongly constructivizable linear...
ISBN (print): 9781467388153
With the push to exascale, in situ visualization and analysis will continue to play an important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources. In this work, we present new statistical performance models, based on algorithmic complexity, that accurately predict the run-time cost of a set of representative rendering algorithms, an essential in situ visualization task. To train and validate the models, we conduct a performance study of an MPI+X rendering infrastructure used in situ with three HPC simulation applications. We then explore feasibility issues using the model for selected in situ rendering questions.
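The flavor of such a model can be illustrated by fitting a run-time predictor whose features follow from the algorithmic complexity of the rendering step, for example cost of roughly a*n + b*n*log(n) + c for n input elements. The sketch below fits this form by least squares on synthetic timings; the feature set and coefficients are assumptions for illustration, not the paper's published models.

```python
# A sketch of a complexity-based performance model: predicted render time is a
# linear combination of complexity-derived features (n and n*log(n)), fit by
# least squares. The feature set is an assumption for illustration only.
import numpy as np

def fit_model(n_values, times):
    features = np.column_stack([n_values, n_values * np.log(n_values),
                                np.ones_like(n_values)])
    coeffs, *_ = np.linalg.lstsq(features, times, rcond=None)
    return coeffs

def predict(coeffs, n):
    return coeffs[0] * n + coeffs[1] * n * np.log(n) + coeffs[2]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = np.array([1e5, 5e5, 1e6, 5e6, 1e7])
    true_time = 2e-8 * n * np.log(n) + 5e-9 * n + 0.01    # synthetic timings
    observed = true_time * (1 + 0.02 * rng.standard_normal(n.size))
    coeffs = fit_model(n, observed)
    print("predicted cost at n = 2e7:", predict(coeffs, 2e7))
```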
The computational complexity of the following type of problems is studied. Given a topological layout (i.e., a drawing in the plane) of a graph, does it contain a noncrossing subgraph of a given type? It is conjectured that such problems are always NP-hard (provided planar subgraphs are looked for) regardless of the complexity of their nonplanar versions. This conjecture is verified for several cases in a very strong sense. In particular, it is shown that deciding the existence of a noncrossing path connecting two given vertices in a given topological layout of a 3-regular graph, as well as deciding the existence of a noncrossing cycle in such a layout, are NP-complete problems. It is also proved that deciding the existence of a noncrossing k-factor in a topological layout of a (k + 1)-regular graph is NP-complete for k = 2, 3, 4, 5. For k = 1, this question is NP-complete in layouts of 3-regular graphs, while it is solvable in polynomial time for layouts of graphs with maximum degree two.