The progressive loss of functional capacity due to aging is a serious problem that can compromise human locomotion, requiring the help of an assistant and reducing independence. The NanoStim project aims to develop a system capable of delivering electrostimulation treatment at the patient's home, reducing the number of consultations. The knee angle is one of the essential attributes in this context, helping to understand the patient's movement during the treatment session. This article presents a wearable system that recognizes the knee angle through IMU sensors. The hardware chosen for the wearables is low cost, including an ESP32 microcontroller and an MPU-6050 sensor. However, this hardware impairs signal accuracy in the multitasking environment expected in rehabilitation treatment. Three optimization filters with algorithmic complexity O(1) were tested to reduce the signal's noise. The complementary filter obtained the best result, presenting an average error of 0.6 degrees and a 77% improvement in MSE. Furthermore, a mobile-app interface was developed to respond immediately to the recognized movement. The system was tested with volunteers in a real environment and successfully measured the movements performed. In future work, the recognized angle is planned to be used together with an electromyography sensor.
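For context, a complementary filter of the kind that won out here fuses the integrated gyroscope rate (smooth but drifting) with the accelerometer-derived angle (noisy but drift-free) in constant time per sample. The sketch below is a minimal illustration of that technique; the blend coefficient and the tilt-angle convention are assumptions for the example, not values from the paper.

```python
import math

class ComplementaryFilter:
    """O(1)-per-sample fusion of gyroscope and accelerometer angle estimates."""

    def __init__(self, alpha: float = 0.98):
        # alpha = 0.98 is an illustrative tuning value, not taken from the
        # paper: closer to 1 trusts the gyro, closer to 0 the accelerometer.
        self.alpha = alpha
        self.angle = 0.0  # current angle estimate, in degrees

    def update(self, gyro_rate_dps: float, accel_angle_deg: float, dt: float) -> float:
        # Constant work per sample: one integration step and one blend.
        self.angle = (self.alpha * (self.angle + gyro_rate_dps * dt)
                      + (1.0 - self.alpha) * accel_angle_deg)
        return self.angle

def accel_tilt_deg(ax: float, ay: float, az: float) -> float:
    """Tilt angle from raw accelerometer axes (one common convention)."""
    return math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))

# Hypothetical usage with MPU-6050 samples arriving at 100 Hz:
f = ComplementaryFilter()
angle = f.update(gyro_rate_dps=1.5, accel_angle_deg=42.0, dt=0.01)
```

Because only a fixed number of arithmetic operations runs per sample, such a filter stays cheap enough for an ESP32 that is also handling communication and other tasks.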
It is a well-known fact that binary sequences (strings) of high algorithmic complexity can be taken as good approximations of statistically independent random sequences with two equiprobable outputs. Here a “sequence of high algorithmic complexity” is one for which the length of the shortest program generating the sequence on a universal Turing machine differs from the length of the sequence itself by at most an a priori given constant. The present paper generalizes this result to the case of a finite (not necessarily binary) alphabet. Considering an infinite sequence of finite sequences of high algorithmic complexity over a finite alphabet, the relative frequency of occurrences of each letter, or of each finite string of letters, is proved to tend to the reciprocal of the total number of letters, or of strings of the given length, respectively. This result may be seen as an analogue of the strong law of large numbers for the equiprobable (uniform) distribution.
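This convergence is easy to observe empirically: since all but an exponentially small fraction of length-n strings have near-maximal algorithmic complexity, a uniformly random string is a reasonable stand-in for a high-complexity one, and its block frequencies approach the predicted reciprocal 1/|Σ|^k. A small sketch (the alphabet and lengths are arbitrary choices for illustration):

```python
import random
from collections import Counter

def block_frequencies(s: str, k: int) -> dict:
    """Relative frequency of each length-k block (overlapping occurrences)."""
    total = len(s) - k + 1
    counts = Counter(s[i:i + k] for i in range(total))
    return {block: c / total for block, c in counts.items()}

alphabet = "abc"                 # finite, not necessarily binary
n = 1_000_000
# Almost all long strings have high algorithmic complexity, so a uniformly
# random draw serves as a stand-in for one.
s = "".join(random.choices(alphabet, k=n))

for k in (1, 2):
    freqs = block_frequencies(s, k)
    target = 1 / len(alphabet) ** k   # the reciprocal the theorem predicts
    worst = max(abs(f - target) for f in freqs.values())
    print(f"k={k}: target {target:.4f}, max deviation {worst:.4f}")
```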
Given two comparative maps, that is, two sequences of markers each representing a genome, the Maximal Strip Recovery problem (MSR) asks to extract a largest sequence of markers from each map such that the two extracted sequences are decomposable into non-intersecting strips (or synteny blocks). The aim is to define a robust set of synteny blocks between different species, which is key to understanding the evolutionary process since their last common ancestor. In this paper, we add a fundamental constraint to the initial problem, expressing the biologically motivated need to bound the number of intermediate (non-selected) markers between two consecutive markers in a strip. We therefore introduce the problem delta-gap-MSR, where delta is a (usually small) non-negative integer that upper-bounds the number of non-selected markers between two consecutive markers in a strip. We show that, if we restrict ourselves to comparative maps without duplicates, the problem is polynomial for delta = 0, NP-complete for delta = 1, and APX-hard for delta >= 2. For comparative maps with duplicates, the problem is APX-hard for all delta >= 0. (C) 2012 Elsevier B.V. All rights reserved.
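To make the gap constraint concrete: within a strip, each pair of consecutive selected markers may skip at most delta markers of the original map. A minimal sketch of that check (the representation by sorted index positions is an illustrative choice, not from the paper):

```python
def respects_gap(selected_positions: list[int], delta: int) -> bool:
    """True if consecutive selected markers leave at most `delta`
    non-selected markers between them in the original map.

    Positions are 0-based indices into the map, assumed sorted.
    """
    return all(b - a - 1 <= delta
               for a, b in zip(selected_positions, selected_positions[1:]))

# Markers at positions 2, 3, 5 skip one marker (position 4), so the
# strip is valid for delta >= 1 but not for delta = 0.
assert respects_gap([2, 3, 5], delta=1)
assert not respects_gap([2, 3, 5], delta=0)
```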
The recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g. quantum random generators. This development has stimulated a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of a quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review, we discuss the basics of the classical theory of randomness (which is itself very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. We also briefly discuss "digital philosophy", its role in physics (classical and quantum), and its coupling to the information interpretation of quantum mechanics (QM).
In this paper we have studied the performance of rate-1/2 convolutional encoders with adaptive states developed in chaotic and hyperchaotic regions. These states are generated by varying the control parameters in a feedback-controlled system. Several sets of closed-loop simulations are performed to demonstrate the benefit of information-based chaos systems. In particular, it is demonstrated that two varieties of information-based systems provide improved performance over all the encoder choices when hyperchaos states are utilized. Special attention was paid to the algorithmic complexity of the systems for an entire class of rate-1/2 encoders. The decoder is able to recover the encrypted data and to reasonably estimate the bit error rate for different signal strengths over an AWGN channel. This indicates that the encoder can update the information map in real time to compensate for changing data for both chaotic and hyperchaotic states, and is evidence that occasional changes in the data stream can be handled by the decoder in a real-time application. Numerical evidence indicates that the algorithmic complexity associated with the hyperchaotically encrypted and convolutionally encoded data provides better security along with an increase in the error-correcting capacity of the decoder.
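For reference, a rate-1/2 convolutional encoder emits two output bits per input bit, each a parity over a shift register tapped by a fixed generator polynomial. The sketch below uses the classic (7, 5)-octal generators with constraint length 3 as an arbitrary example; the paper's chaos-driven adaptation of the encoder state is not modeled here.

```python
def conv_encode_rate_half(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: 2 output bits per input bit.

    g1, g2 are generator polynomials (the classic (7, 5) in octal,
    constraint length k = 3); the shift register starts in the all-zero
    state. Chaotic state adaptation as in the paper is not modeled.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity for branch 1
        out.append(bin(state & g2).count("1") % 2)   # parity for branch 2
    return out

print(conv_encode_rate_half([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```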
A multi-cut rearrangement of a string $S$ is a string $S'$ obtained from $S$ by an operation called a $k$-cut rearrangement, which consists of (1) cutting $S$ at a given number $k$ of places, making $S$ the concatenated string $X_1 \cdot X_2 \cdot X_3 \cdots X_k \cdot X_{k+1}$, where $X_1$ and $X_{k+1}$ are possibly empty, and (2) rearranging the $X_i$s so as to obtain $S' = X_{\pi(1)} \cdot X_{\pi(2)} \cdot X_{\pi(3)} \cdots X_{\pi(k+1)}$, where $\pi$ is a permutation on $\{1, 2, \ldots, k+1\}$ satisfying $\pi(1) = 1$ and $\pi(k+1) = k+1$. Given two strings $S$ and $T$ built on the same multiset of characters from an alphabet $\Sigma$, the Sorting by Multi-Cut Rearrangements (SMCR) problem asks whether a given number $\ell$ of $k$-cut rearrangements suffices to transform $S$ into $T$. The SMCR problem generalizes several classical genomic rearrangement problems, such as Sorting by Transpositions and Sorting by Block Interchanges. It may also model chromoanagenesis, a recently discovered phenomenon consisting of massive simultaneous rearrangements. In this paper, we study the SMCR problem from an algorithmic complexity viewpoint. More precisely, we investigate its classical and parameterized complexity, as well as its approximability, in the general case and when $S$ and $T$ are permutations.
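A single $k$-cut rearrangement as defined above is straightforward to implement; the sketch below takes the cut positions and the permutation as inputs and enforces the endpoint conditions $\pi(1)=1$ and $\pi(k+1)=k+1$ (representation choices are illustrative, not from the paper):

```python
def k_cut_rearrangement(s: str, cuts: list[int], pi: list[int]) -> str:
    """Apply one k-cut rearrangement to s.

    cuts: k cut positions (indices between characters), splitting s
    into k+1 pieces X_1 .. X_{k+1}.
    pi: a permutation of 1..k+1 (1-based) with pi[0] == 1 and
    pi[-1] == k + 1, i.e. the first and last pieces stay in place.
    """
    k = len(cuts)
    assert pi[0] == 1 and pi[-1] == k + 1, "endpoints must stay fixed"
    bounds = [0] + sorted(cuts) + [len(s)]
    pieces = [s[bounds[i]:bounds[i + 1]] for i in range(k + 1)]
    return "".join(pieces[j - 1] for j in pi)

# A 3-cut rearrangement swapping the two middle pieces:
# "ab|cd|ef|gh" -> "ab" + "ef" + "cd" + "gh"
assert k_cut_rearrangement("abcdefgh", [2, 4, 6], [1, 3, 2, 4]) == "abefcdgh"
```

The example above is exactly a transposition of two adjacent blocks, illustrating how SMCR subsumes Sorting by Transpositions as a special case.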
Using the mathematical background for algorithmic complexity developed by Kolmogorov in the sixties, Cilibrasi and Vitanyi have designed a similarity distance named the normalized compression distance, applicable to the clustering of objects of any kind, such as music, texts, or gene sequences. The normalized compression distance is a quasi-universal normalized admissible distance under certain conditions. This paper shows that the compressors used to compute the normalized compression distance are not idempotent in some cases, being strongly skewed by the size of the objects relative to the window size, and therefore cause a deviation in the identity property of the distance if we do not take care that the objects to be compressed fit the windows. The relationship between the precision of the distance and the size of the objects has been analyzed for several well-known compressors, and especially in depth for three cases, bzip2, gzip, and PPMZ, which are examples of the three main types of compressors: block-sorting, Lempel-Ziv, and statistical.
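Both the distance and the idempotency failure described here are easy to reproduce with standard-library compressors. The sketch below computes NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) and checks NCD(x, x), which an idempotent compressor would keep near 0; the object sizes are arbitrary choices straddling typical window/block sizes (roughly 32 KB for zlib's window, roughly 900 KB for bzip2's default block).

```python
import bz2
import os
import zlib

def ncd(x: bytes, y: bytes, compress) -> float:
    """Normalized compression distance under a given compressor."""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

for name, compress in [("zlib (Lempel-Ziv)", zlib.compress),
                       ("bzip2 (block-sorting)", bz2.compress)]:
    for size in (10_000, 1_000_000):   # second size exceeds typical windows
        x = os.urandom(size)
        # Idempotency would require NCD(x, x) ~ 0; objects larger than the
        # compressor's window/block make the second copy of x invisible to
        # it, so C(xx) approaches 2*C(x) and the distance inflates.
        print(f"{name}, {size} bytes: NCD(x, x) = {ncd(x, x, compress):.3f}")
```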
The problem of determining the points in a program at which variables are “live” (will be used again) is introduced and discussed. Two solutions, one which uses a simple iterative algorithm and one which uses an algorithm based on “Cocke–Allen interval” analysis, are presented and analyzed. These algorithms are compared on “self-replicating” families of reducible program flow graphs. The results are inconclusive in that the interval method requires fewer bit-vector steps on some graphs and more on others. If n is the number of nodes in a program flow graph and the number of edges is linearly proportional to n, then both algorithms require O(n²) bit-vector steps in the worst case.
Keywords: optimization of compiled code; compiler; flow graph reducibility; interval analysis; live variables; algorithmic complexity
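The iterative algorithm compared in the abstract above is compact enough to sketch. Live-variable analysis is a backward data-flow problem solved by iterating to a fixed point; production compilers run it over bit vectors (hence the bit-vector step counts), while the illustration below uses plain Python sets and an invented three-node flow graph.

```python
def live_variables(succ, use, defs):
    """Backward data-flow fixed point:
    live_out[n] = union of live_in over successors of n,
    live_in[n]  = use[n] | (live_out[n] - defs[n]).
    """
    live_in = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for n in succ:
            out = set().union(*(live_in[s] for s in succ[n]))
            inn = use[n] | (out - defs[n])
            if out != live_out[n] or inn != live_in[n]:
                live_out[n], live_in[n] = out, inn
                changed = True
    return live_in, live_out

# Hypothetical flow graph: 1 -> 2 -> 3, with a back edge 3 -> 2.
succ = {1: [2], 2: [3], 3: [2]}
use  = {1: set(), 2: {"x"}, 3: {"y"}}
defs = {1: {"x"}, 2: {"y"}, 3: set()}
# live_in per node: x is live entering node 2; x and y entering node 3.
print(live_variables(succ, use, defs)[0])
```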
There are many examples of failed strategies whose intention is to optimize a process but which instead produce worse results than no strategy at all. Many fall under the loose umbrella of the "no free lunch theorem". In this paper we present an example in which a simple (but admittedly naive) strategy intended to shorten proof lengths in the propositional calculus produces results that are significantly worse than those achieved without any method to try to shorten them, in contrast with what was to be expected intuitively, namely no improvement in the length of the proofs. Another surprising result is how early the naive strategy failed. We set up an experiment in which we sample random classical propositional theorems and then feed them to two very popular automatic theorem provers (AProS and Prover9). We then compared the lengths of the proofs obtained under two methods: (1) the application of the theorem provers with no additional information; (2) the addition of new (redundant) axioms to the provers. The second method produced even longer proofs than the first one.
Electroencephalography (EEG), as an example of electrophysiological monitoring methods, has a rather long history of successful application in the diagnosis and treatment of diseases, and this success would not have been possible without effective methods of mathematical, and more recently, computer analysis. Most of these methods are based on statistics. Among the methods of EEG analysis there is a group of methods that use different versions of Shannon's entropy estimation as a "main component" and that do not differ significantly from traditional statistical approaches. Despite the external similarity, another approach is to use the Kolmogorov-Chaitin definition of complexity and the concepts of algorithmic information dynamics. The algorithmic dynamics toolbox includes techniques (e.g., the block decomposition method) that appear to be applicable to EEG analysis. The current paper is an attempt to use the block decomposition method along with recent machine-learning additions to the management of EEG data, with the ultimate goal of making these data more useful to researchers and medical practitioners.
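As background, the block decomposition method scores a discretized signal as BDM(x) = Σ (CTM(b) + log₂ m_b) over the distinct blocks b of x, where m_b is each block's multiplicity and the CTM values come from precomputed tables built by exhaustively running small Turing machines. Those tables are not reproduced here, so the sketch below substitutes a placeholder complexity proxy; it illustrates the decomposition step only, not real CTM values.

```python
import math
from collections import Counter

def ctm_placeholder(block: str) -> float:
    """Stand-in for the precomputed CTM value of a short binary block.

    Real BDM looks this up in tables derived from exhaustive runs of
    small Turing machines; the proxy here (distinct-substring count)
    merely preserves 'simpler block -> lower score' for demonstration.
    """
    return float(len({block[i:j] for i in range(len(block))
                      for j in range(i + 1, len(block) + 1)}))

def bdm(binary_signal: str, block_size: int = 12) -> float:
    """Block decomposition: sum complexity over distinct blocks, plus
    log2 of each block's multiplicity (repetition is nearly free)."""
    blocks = [binary_signal[i:i + block_size]
              for i in range(0, len(binary_signal), block_size)]
    counts = Counter(blocks)
    return sum(ctm_placeholder(b) + math.log2(m) for b, m in counts.items())

# A binarized EEG excerpt (e.g. thresholded at the median) would go here;
# a highly regular signal scores lower than an irregular one.
print(bdm("01" * 60), bdm("011010001110" * 10))
```

For actual analyses the placeholder would be replaced by table lookups, for example as distributed with tooling from the algorithmic information dynamics community (e.g. the pybdm package).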