Author:
Suzuki, H.
ATR Human Information Processing Research Laboratories, Seika, Kyoto 619-0288, Japan
A novel machine language genetic programming system that uses one-dimensional core memories is proposed and simulated. The core is compared to a biochemical reaction space, and in imitation of biological molecules, four types of data words (Membrane, Pure data, Operator, and Instruction) are prepared in the core. A program is represented by a sequence of Instructions. During execution of the core, Instructions are transcribed into corresponding Operators, and Operators modify, create, or transfer Pure data. The core is hierarchically partitioned into sections by the Membrane data, and the data transfer between sections by special channel Operators constitutes a tree-shaped data-flow structure among sections in the core. In the experiment, genetic algorithms are used to modify program information. A simple machine learning problem is prepared as the environment data set of the creatures (programs), and the fitness value of a creature is calculated from the Pure data excreted by the creature. Breeding of programs that can output the predefined answer is successfully carried out. Several future plans to extend this system are also discussed.
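The transcription step described above — Instruction words yielding corresponding Operator words in the core — can be caricatured as follows. This is a toy sketch only: the word types are named in the abstract, but the tuple representation, payloads, and `transcribe` function are invented for illustration, and the paper's actual core semantics are far richer.

```python
from enum import Enum

# The four word types named in the abstract; payloads are invented.
class WordType(Enum):
    MEMBRANE = "membrane"
    PURE_DATA = "pure_data"
    OPERATOR = "operator"
    INSTRUCTION = "instruction"

def transcribe(core):
    """Append one Operator word for every Instruction word in the core,
    loosely mimicking the transcription step described in the abstract."""
    operators = [(WordType.OPERATOR, payload)
                 for kind, payload in core if kind is WordType.INSTRUCTION]
    return core + operators

# A tiny core: one Membrane, one Instruction, one Pure-data word.
core = [(WordType.MEMBRANE, 0),
        (WordType.INSTRUCTION, "inc"),
        (WordType.PURE_DATA, 7)]
new_core = transcribe(core)
```

In this sketch the single Instruction `"inc"` is transcribed into one new Operator word appended to the core, after which (in the real system) Operators would go on to modify or transfer Pure data.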
Gold's [1967. Language identification in the limit. Information and Control, 10, 447-474] celebrated work on learning in the limit has been taken, by many cognitive scientists, to have powerful negative implications for the learnability of language from positive data (i.e., from mere exposure to linguistic input). This provides one of several lines of argument that language acquisition must draw on other sources of information, including innate constraints on learning. We consider an 'ideal learner' that applies a Simplicity Principle to the problem of language acquisition. The Simplicity Principle chooses the hypothesis that provides the briefest representation of the available data; here, the data are the linguistic input to the child. The Simplicity Principle allows learning from positive evidence alone, given quite weak assumptions, in apparent contrast to results on language learnability in the limit (e.g., Gold, 1967). These results provide a framework for reconsidering the learnability of various aspects of natural language from positive evidence, which has been at the center of theoretical debate in research on language acquisition and linguistics. (c) 2006 Elsevier Inc. All rights reserved.
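The Simplicity Principle is an instance of two-part minimum-description-length (MDL) selection: pick the hypothesis minimizing the bits needed to state the hypothesis plus the bits needed to encode the data under it. The sketch below illustrates the selection rule only; the hypothesis costs and probability models are made-up numbers, not anything from the paper.

```python
import math

def description_length(hypothesis_bits, data, model):
    """Two-part code length: bits to state the hypothesis plus
    Shannon code lengths (-log2 p) for the observed data under it."""
    return hypothesis_bits + sum(-math.log2(model[x]) for x in data)

# Hypothetical example: two 'grammars' assigning probabilities to strings.
data = ["ab", "ab", "ab", "abab"]
h1 = (2.0, {"ab": 0.5, "abab": 0.5})    # cheap hypothesis, flat model
h2 = (10.0, {"ab": 0.9, "abab": 0.1})   # costlier hypothesis, sharper fit
best = min([h1, h2], key=lambda h: description_length(h[0], data, h[1]))
```

Here the learner prefers the simpler grammar: its extra data-encoding cost on these four strings is smaller than the eight extra bits the sharper hypothesis charges up front.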
In the present paper, the issue of optimally solving severely ill-posed problems is studied. The authors propose a projection scheme of discretization that is efficient with respect to the amount of Galerkin information used. With this scheme, order-optimal estimates of the quantities describing the information and algorithmic complexity are achieved.
The important requirements for the success of quantum computation are stated. These requirements involve coherence-preserving Hamiltonians as well as exact integrability of the corresponding Feynman path integrals. We also explain the role of metric entropy in dynamical evolutionary systems and outline some of the open problems in the design of quantum computational systems. Finally, we observe that unless we understand quantum nondemolition measurements, quantum integrability, quantum chaos, and the direction of the arrow of time, the quantum control and computational paradigms will remain elusive, and the design of systems based on quantum dynamical evolution may not be feasible.
Searching for motifs in graphs has become a crucial problem in the analysis of biological networks. In the context of metabolic network analysis, Lacroix et al. [V. Lacroix, C.G. Fernandes, M.-F. Sagot, IEEE/ACM Transactions on Computational Biology and Bioinformatics 3 (4) (2006) 360-368] introduced the NP-hard general problem of finding occurrences of motifs in vertex-colored graphs, where a motif M is a multiset of colors and an occurrence of M in a vertex-colored graph G, called the target graph, is a subset of vertices that induces a connected graph and whose induced multiset of colors is exactly the motif. Pursuing the line of research pioneered by Lacroix et al. and aiming at dealing with approximate solutions, we consider in this paper the above-mentioned problem in two of its natural optimization forms, referred to hereafter as the Min-CC and the Maximum Motif problems. The Min-CC problem seeks an occurrence of a motif M in a vertex-colored graph G that induces a minimum number of connected components, whereas the Maximum Motif problem is concerned with finding a maximum-cardinality submotif M' of M that occurs as a connected motif in G. We prove the Min-CC problem to be APX-hard even in the extremal case where the motif is a set and the target graph is a path. We complement this result by giving a polynomial-time algorithm in case the motif is built upon a fixed number of colors and the target graph is a path. Also, extending [M. Fellows, G. Fertin, D. Hermelin, S. Vialette, in: Proc. 34th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 4596, Springer, 2007, pp. 340-351], we prove the Min-CC problem to be fixed-parameter tractable when parameterized by the size of the motif, and we give a faster algorithm in case the target graph is a tree. Furthermore, we prove the Min-CC problem for trees not to be approximable within ratio c log n for some constant c > 0, where n is the order of the tree.
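The occurrence test underlying these problems is simple to state: a vertex subset is an occurrence of a motif if its induced color multiset equals the motif and the induced subgraph is connected. A minimal sketch of that check (not any of the paper's optimization algorithms, which are the hard part):

```python
from collections import Counter, deque

def is_occurrence(adj, color, subset, motif):
    """True iff `subset` is an occurrence of `motif` in the
    vertex-colored graph `adj`: the color multiset of `subset`
    must equal the motif, and the induced subgraph must be connected."""
    subset = set(subset)
    if Counter(color[v] for v in subset) != Counter(motif):
        return False
    # BFS restricted to `subset` to test connectivity of the induced graph.
    start = next(iter(subset))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in subset and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == subset

# Hypothetical target graph: a path 1-2-3 plus an isolated vertex 4.
adj = {1: [2], 2: [1, 3], 3: [2], 4: []}
color = {1: "r", 2: "g", 3: "r", 4: "g"}
```

On this example, {1, 2, 3} is an occurrence of the motif {r, r, g}, while {1, 4} matches the colors of {r, g} but fails connectivity — exactly the failure mode the Min-CC relaxation tolerates by counting connected components instead.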
In this paper we study the performance of chaotically encrypted rate-1/n convolutional encoders. The design goal is a crypto-coding system that possesses both error-correction and encryption features. The proposed system achieves reasonable error-correction performance for the proposed configuration with a certain encoder rate and constraint length. The performance of each system was determined by calculating the algorithmic complexity of the system outputs. The complexity of each decoding algorithm is compared to the results of the alternative algorithm. Numerical evidence indicates that the algorithmic complexity associated with particular rate-1/n convolutional encoders increases as the constraint length increases, while the error-correcting capacity of the decoder expands. Both code design and constraint length are controllable architectural parameters that have a significant effect on system performance.
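For readers unfamiliar with rate-1/n convolutional encoding, the sketch below shows a textbook rate-1/2 encoder with constraint length K = 3 and the standard (7, 5) octal generators — an assumed example configuration, not necessarily the one studied in the paper, and without the chaotic encryption layer.

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Rate-1/n convolutional encoder: shift each input bit into a
    K-bit register and emit one parity bit per generator polynomial."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift in the new bit
        for g in gens:
            # Parity of the register bits selected by generator g.
            out.append(bin(state & g).count("1") % 2)
    return out
```

Encoding the input 101 with these generators yields the familiar output 11 10 00; increasing K lengthens the register, which is the "constraint length" knob whose effect on complexity and error correction the abstract discusses.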
We analyze the apparent increase in entropy in the course of the spin-echo effect using algorithmic information theory. We show that although the state of the spins quickly becomes algorithmically complex, then simple again during the echo, the overall complexity of the spins together with the magnetic field grows slowly, as the logarithm of the elapsed time. This slow increase in complexity is reflected in an increased difficulty in taking advantage of the echo pulse. Our discussion illustrates the fundamental role of algorithmic information content in the formulation of statistical physics, including the second law of thermodynamics, from the viewpoint of the observer.
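True algorithmic (Kolmogorov) complexity is uncomputable, so in practice one often uses compressed length as an upper-bound proxy for the complexity of a state description. The sketch below is an illustrative stand-in for the "regular vs. scrambled" distinction the abstract draws, with made-up byte strings, not any computation from the paper.

```python
import hashlib
import zlib

def complexity_proxy(data: bytes) -> int:
    """Compressed length as a computable stand-in for algorithmic
    information content (an upper-bound proxy only; true Kolmogorov
    complexity is uncomputable)."""
    return len(zlib.compress(data, 9))

# Regular string, like spins before dephasing: compresses to almost nothing.
ordered = b"up" * 1024
# Pseudo-random bytes (SHA-256 outputs), standing in for a scrambled state.
scrambled = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(64))
```

Both strings are 2048 bytes long, but the ordered one compresses by orders of magnitude more, mirroring the drop in apparent complexity the echo pulse restores.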
The Lenstra-Lenstra-Lovasz (LLL) algorithm is an effective receiver algorithm for Multiple-Input Multiple-Output (MIMO) systems and is believed to achieve full diversity in MIMO detection over fading channels. However, the LLL algorithm features polynomial complexity and shows poor convergence. Reducing algorithmic complexity and accelerating convergence are therefore key problems in optimizing the LLL algorithm. In this paper, a variant of the LLL algorithm, the Hybrid-Fix-and-Round LLL algorithm, which combines both fix and round operations in the size-reduction procedure, is proposed. By utilizing the fix operation, the algorithmic procedure is altered and the size-reduction procedure is skipped by the hybrid algorithm with significantly higher probability. As a consequence, the simulation results reveal that the Hybrid-Fix-and-Round-LLL algorithm converges faster than the original LLL algorithm, and its algorithmic complexity is at most one order lower than that of the original LLL algorithm in the real field. Compared to other members of the LLL algorithm family, the Hybrid-Fix-and-Round-LLL algorithm achieves a better compromise between performance and algorithmic complexity.
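The "fix vs. round" distinction is just the choice of integer quantizer applied to a Gram-Schmidt coefficient mu during size reduction. A minimal sketch of that single choice (assuming the standard interpretation of "fix" as truncation toward zero; everything else in the hybrid algorithm is omitted):

```python
import math

def size_reduce_coeff(mu, mode="round"):
    """Integer multiple subtracted in one size-reduction step.
    Classical LLL rounds mu to the nearest integer; the 'fix' variant
    truncates toward zero, so fix(mu) = 0 whenever |mu| < 1 and the
    reduction step is skipped entirely."""
    if mode == "round":
        return round(mu)
    return math.trunc(mu)  # the 'fix' operation
```

For mu = 0.7, rounding returns 1 (a reduction is performed) while fix returns 0 (the step is skipped) — which is why the hybrid algorithm skips size reduction with higher probability and converges faster.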
A pair of variables that tend to rise and fall either together or in opposition are said to be monotonically associated. For certain phenomena, this tendency is causally restricted to a subpopulation, as, e.g., the severity of an allergic reaction trending with the concentration of an air pollutant. Previously, Yu et al. (Stat Methodol 2011, 8:97-111) devised a method of rearranging observations to test paired data to see if such an association might be present in a subpopulation. However, the computational intensity of the method limited its application to relatively small samples of data, and the test itself only judges whether association is present in some subpopulation; it does not clearly identify the subsample that came from this subpopulation, especially when the whole sample tests positive. The present study adds a 'top-K' feature (Sampath S, Verducci JS. Stat Anal Data Min 2013, 6:458-471) based on a multistage ranking model that identifies a concise subsample likely to contain a high proportion of observations from the subpopulation in which the association is supported. Computational improvements incorporated into this top-K tau-path algorithm now allow the method to be extended to thousands of pairs of variables measured on sample sizes in the thousands. A description of the new algorithm, along with measures of computational complexity and practical efficiency, helps to gauge its potential use in different settings. Simulation studies catalog its accuracy in various settings, and an example from finance illustrates its step-by-step use. (C) 2016 Wiley Periodicals, Inc.
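The monotone-association measure underlying the tau-path approach is Kendall's tau: the normalized difference between concordant and discordant pairs. A minimal O(n^2) sketch of that base statistic (ties ignored; the paper's contribution — the rearrangement and top-K machinery built on top of it — is not shown):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant) / total pairs.
    +1 for a perfectly rising relationship, -1 for a perfectly
    falling one; assumes no ties for simplicity."""
    n = len(x)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

The tau-path idea reorders observations so that tau computed on successive prefixes reveals whether a strongly associated subsample is hiding inside a weakly associated whole.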
We describe a new algorithm for finding the convex hull of any simple polygon specified by a sequence of m vertices. An earlier convex hull finder of ours is limited to polygons which remain simple (i.e., non-self-intersecting) when locally non-convex vertices are removed. In this paper we amend our earlier algorithm so that it finds, with complexity O(m), the convex hull of any simple polygon, while retaining much of the simplicity of the earlier algorithm.
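For contrast with the paper's linear-time method, the sketch below is Andrew's monotone-chain hull, a standard O(m log m) algorithm for arbitrary point sets. It does not exploit the vertex ordering of a simple polygon — that ordering is precisely what lets the paper's algorithm reach O(m) — so this is a generic baseline, not the authors' method.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, O(m log m) from sorting.
    Returns hull vertices in counterclockwise order, collinear
    points excluded."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints
```

On a unit square with one interior point, the interior point is discarded and the four corners come back in counterclockwise order; the sorting step is the only reason this generic version cannot match the O(m) bound claimed above.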