A speed-up technique for the non-local means (NLM) image denoising algorithm based on probabilistic early termination (PET) is proposed. A significant amount of computation in the NLM scheme is dedicated to the distortion calculation between pixel neighborhoods. The proposed PET scheme adopts a probability model to achieve early termination: the distortion computation can be terminated, and the corresponding contributing pixel rejected early, if the expected distortion value is too high to be of significance in the weighted averaging. A performance comparison with several fast NLM schemes is provided to demonstrate the effectiveness of the proposed algorithm.
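As an illustration of this kind of early termination (a minimal sketch, not the paper's PET probability model), the following Python snippet computes one NLM weight while accumulating the patch distortion row by row and aborts as soon as the partial sum already guarantees a negligible weight. The parameter names (patch_radius, h, tau) and the threshold rule tau = -h^2 * ln(1e-3) are illustrative assumptions.

import numpy as np

def nlm_weight_with_early_termination(img, p, q, patch_radius=3, h=10.0, tau=None):
    """Accumulate the patch distortion between interior pixels p and q row by row
    and stop early once the partial sum already guarantees a negligible weight.
    A simplified stand-in for the paper's probabilistic early termination."""
    r = patch_radius
    if tau is None:
        # Distortion above which exp(-d / h**2) is treated as negligible.
        tau = -(h ** 2) * np.log(1e-3)
    d = 0.0
    for dy in range(-r, r + 1):
        row_p = img[p[0] + dy, p[1] - r:p[1] + r + 1].astype(float)
        row_q = img[q[0] + dy, q[1] - r:q[1] + r + 1].astype(float)
        d += float(np.sum((row_p - row_q) ** 2))
        if d > tau:           # early termination: this pixel's contribution is insignificant
            return 0.0
    return float(np.exp(-d / (h ** 2)))

A full denoiser would evaluate this weight for every candidate q in a search window around p and normalise the weighted average of the centre pixels.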
In a forthcoming paper by Beltran and Pardo, the average complexity of linear homotopy methods for solving polynomial equations with random initial input (in a sense to be described below) was proven to be finite, and even polynomial in the size of the input. In this paper, we prove that some other higher moments are also finite. In particular, we show that the variance is polynomial in the size of the input.
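For readers unfamiliar with linear homotopy methods, the sketch below (a toy univariate illustration, not the polynomial-system setting analysed in the paper) tracks one root of f along the straight-line homotopy H(z, t) = (1 - t)·gamma·g(z) + t·f(z) from the start system g(z) = z^d - 1, taking small steps in t with a few Newton corrections per step. The step count, the constant gamma and the example polynomial are arbitrary illustrative choices.

import numpy as np

def track_root(f_coeffs, z0, steps=200, newton_iters=3):
    """Track one root of H(z, t) = (1 - t)*gamma*g(z) + t*f(z) from t = 0 to t = 1,
    where g(z) = z**d - 1 and z0 is a d-th root of unity. The constant gamma is
    the usual trick to keep the path away from singular points."""
    f_coeffs = np.asarray(f_coeffs, dtype=complex)
    d = len(f_coeffs) - 1
    g_coeffs = np.zeros(d + 1, dtype=complex)
    g_coeffs[0], g_coeffs[-1] = 1.0, -1.0                  # g(z) = z**d - 1
    df, dg = np.polyder(f_coeffs), np.polyder(g_coeffs)
    gamma = 0.6 + 0.8j
    z = complex(z0)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):                      # Newton correction at this t
            H = (1 - t) * gamma * np.polyval(g_coeffs, z) + t * np.polyval(f_coeffs, z)
            dH = (1 - t) * gamma * np.polyval(dg, z) + t * np.polyval(df, z)
            z -= H / dH
    return z

# Example: approximate the three roots of f(z) = z**3 - 2z + 2 from the cube roots of unity.
f = [1, 0, -2, 2]
print([track_root(f, np.exp(2j * np.pi * j / 3)) for j in range(3)])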
Quantiles play an important role in data analysis. On-line estimation of quantiles for streaming data, i.e. data arriving step by step over time, is not as simple as incremental or recursive estimation of characteristics such as the mean (expected value) or the variance, especially on devices with limited memory and computation capacity such as electronic control units. In this paper, we propose an algorithm for incremental quantile estimation that overcomes restrictions of previously described techniques. We also develop a statistical test for our algorithm to detect changes, so that the on-line estimation of the quantiles can be carried out in an adaptive or evolving manner. Besides a statistical analysis of our algorithm, we also provide experimental results comparing it with a recursive quantile estimation technique that is restricted to continuous random variables.
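For comparison with the recursive techniques mentioned above, here is a minimal stochastic-approximation quantile estimator in Python (a Robbins-Monro-type update, not the algorithm proposed in the paper): each incoming observation nudges the current estimate up or down depending on which side of the estimate it falls, using a decaying step size.

import random

def update_quantile(q_est, x, p, step):
    """One incremental update of an estimate of the p-quantile: move the estimate
    up by step*p if the observation lies above it, down by step*(1 - p) if below."""
    if x > q_est:
        return q_est + step * p
    if x < q_est:
        return q_est - step * (1.0 - p)
    return q_est

# Streaming usage: estimate the median of incoming readings one at a time.
q, n = 0.0, 0
for _ in range(100_000):
    x = random.gauss(5.0, 2.0)                        # stand-in for one streamed value
    n += 1
    q = update_quantile(q, x, p=0.5, step=10.0 / n)   # decaying step size
print(q)                                              # approaches the true median 5.0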
When plasma is used to spray coatings on a mould surface, the states of the plasma jet and powder particles have a great influence on the quality of the sprayed layer. In order to find a simple and efficient way to simulate the states of the plasma jet and powder particles, a model combining a hexagonal 7-bit lattice Boltzmann method (LBM) with a probabilistic algorithm is developed in this paper. The velocity and temperature fields of the plasma jet and particles are calculated and compared with those obtained by previous models and measurements. The motion, acceleration, maximum speed and spatial distribution of the powder particles are computed with the present model. The optimal spraying distance and deposition efficiency under different spraying conditions are also obtained. It can be concluded that the combined model is well suited to numerical simulation of atmospheric plasma spraying: simulation by LBM is simpler and faster than with traditional methods, and LBM offers flexibility and an outstanding capacity for parallel computation.
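To make the lattice Boltzmann ingredient concrete, the following Python sketch performs one BGK collision-and-streaming step on a standard square D2Q9 lattice with periodic boundaries. It is a generic textbook kernel, not the hexagonal seven-bit plasma-jet model of the paper; the relaxation time tau = 0.6, the domain size and the lattice weights are the usual illustrative values.

import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard textbook values).
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Maxwellian equilibrium distribution truncated to second order in velocity."""
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return W[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.6):
    """One BGK collision + streaming step; returns the updated distributions."""
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau          # collision (relaxation)
    for i, (cx, cy) in enumerate(C):                      # streaming with periodic wrap
        f[i] = np.roll(np.roll(f[i], cx, axis=1), cy, axis=0)
    return f

# Initialise a quiescent periodic domain and advance a few steps.
ny, nx = 64, 64
f = equilibrium(np.ones((ny, nx)), np.zeros((ny, nx)), np.zeros((ny, nx)))
for _ in range(10):
    f = lbm_step(f)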
The quantum algorithm for factorization is probably the most famous one in quantum computation. The algorithm succeeds only when a random number whose order relative to the number being factorized is even is fed as input to the quantum order-finding algorithm. It is well known that numbers with even orders are found with probability not less than 1/2. In consequence, the quantum device has to be used many times during the factorization process to amplify the success probability. However, this bound is only a rough estimate. The theoretical analysis and numerical simulations presented here show that the probability of finding a parameter with an even order is significantly higher for many composite numbers. It immediately follows that existing analyses substantially underestimate the efficiency of factorization, i.e., they overestimate the required number of quantum device uses.
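The claim about even orders can be checked classically for small composites. The following plain Python sketch (no quantum part) computes the multiplicative order of every base coprime to N and reports the fraction whose order is even, which is the quantity the abstract argues typically exceeds the standard 1/2 bound; the test numbers are arbitrary small odd composites.

from math import gcd

def multiplicative_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n); assumes gcd(a, n) == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def fraction_even_order(n):
    """Fraction of bases a in [2, n-2] coprime to n whose order is even."""
    bases = [a for a in range(2, n - 1) if gcd(a, n) == 1]
    even = sum(1 for a in bases if multiplicative_order(a, n) % 2 == 0)
    return even / len(bases)

for n in (15, 21, 35, 91, 187):               # small odd composites
    print(n, round(fraction_even_order(n), 3))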
ISBN (Print): 9781424452446
This paper aims to provide a practical implementation of a probabilistic cipher by extending the algorithms of Fuchsbauer, Goldwasser and Micali. We provide details on designing and implementing the cipher and support our findings with a statistical analysis of the key generation, encryption, and decryption times taken by the cipher for key sizes of 1024, 2048, and 4096 bits over message spaces of 750, 1500, 3000, and 5000 bits. The concept of 'inter-bit operating time' is introduced for the cipher, which measures the time elapsed between two instances of an operation. We examine the working of a probabilistic cipher purely from a practical standpoint in order to assess whether the original algorithm is practically implementable.
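For context, here is a toy Python implementation of classic Goldwasser-Micali probabilistic bit encryption, the kind of scheme such ciphers build on (hard-coded small primes, bit-by-bit encryption, no padding; it is not the extended cipher implemented in the paper). Each bit is masked by a fresh random square modulo N, so repeated encryptions of the same plaintext differ.

import random

# Toy parameters: real keys use primes of 512+ bits each.
P, Q = 499, 547
N = P * Q

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion: 1 for residues, p-1 otherwise."""
    return pow(a, (p - 1) // 2, p)

def find_nonresidue(p, q):
    """A public y with Jacobi symbol (y/N) = 1 that is a non-residue mod both primes."""
    while True:
        y = random.randrange(2, p * q)
        if legendre(y, p) == p - 1 and legendre(y, q) == q - 1:
            return y

Y = find_nonresidue(P, Q)

def encrypt_bit(b):
    """c = Y**b * x**2 mod N for a fresh random x; encryption is probabilistic."""
    x = random.randrange(2, N)
    while x % P == 0 or x % Q == 0:
        x = random.randrange(2, N)
    return (pow(Y, b, N) * pow(x, 2, N)) % N

def decrypt_bit(c):
    """The bit is 0 iff c is a quadratic residue mod P (checked with the secret prime)."""
    return 0 if legendre(c, P) == 1 else 1

bits = [1, 0, 1, 1, 0]
assert [decrypt_bit(encrypt_bit(b)) for b in bits] == bits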
ISBN (Print): 9781424423538
Electroencephalogram (EEG) recordings of brain waves have been shown to exhibit a unique pattern for each individual and thus have potential for biometric applications. In this paper, we propose an EEG feature extraction and hashing approach for person authentication. Multivariate autoregressive (mAR) coefficients are extracted as features from multiple EEG channels and then hashed using our recently proposed Fast Johnson-Lindenstrauss Transform (FJLT)-based hashing algorithm to obtain compact hash vectors. Based on the EEG hash vectors, a Naive Bayes probabilistic model is employed for person authentication. Our EEG hashing approach presents a fundamental departure from existing methods in EEG-biometry research. The promising results suggest that hashing may open new research directions and applications in the emerging EEG-based biometry area.
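The hash-then-classify pipeline can be prototyped in a few lines. In the sketch below, a fixed Gaussian random projection stands in for the FJLT-based hashing and a per-dimension Gaussian Naive Bayes likelihood test stands in for the authentication step; the synthetic 60-dimensional vectors play the role of mAR feature vectors, and all dimensions and thresholds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, out_dim = 60, 16
R = rng.normal(size=(d, out_dim)) / np.sqrt(out_dim)   # fixed projection (stand-in for FJLT)

def hash_features(features):
    """Map feature vectors (e.g. mAR coefficients) to compact hash vectors."""
    return features @ R

class NaiveBayesAuthenticator:
    """Per-dimension Gaussian model of one enrolled user's hash vectors."""
    def fit(self, hashes):
        self.mu = hashes.mean(axis=0)
        self.var = hashes.var(axis=0) + 1e-6
        return self

    def log_likelihood(self, h):
        return float(-0.5 * np.sum((h - self.mu) ** 2 / self.var
                                   + np.log(2 * np.pi * self.var)))

    def authenticate(self, h, threshold=-50.0):
        """Accept the claimed identity if the hash is likely under the user's model."""
        return self.log_likelihood(h) >= threshold

# Illustrative usage with synthetic feature vectors in place of real EEG features.
enrol = rng.normal(loc=0.3, size=(40, d))    # enrolled user's past sessions
probe = rng.normal(loc=0.3, size=(1, d))     # new session claiming that identity
model = NaiveBayesAuthenticator().fit(hash_features(enrol))
print(model.authenticate(hash_features(probe)[0]))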
This paper proposes a polynomial-time probabilistic approach to solving the observability problem for sampled-data piecewise affine systems. First, an algebraic characterization of observability is derived. Next, based on this characterization, we propose a randomized algorithm that determines either that the system is observable in a probabilistic sense or that it is not observable in a deterministic sense. Finally, examples for which checking observability deterministically is hopeless show that the proposed algorithm is very useful.
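Although the algebraic characterization in the paper is specific to sampled-data piecewise affine systems, the flavour of such randomized decision procedures can be illustrated with a generic rank test: evaluate a parameter-dependent matrix at random parameter values, treat any full-rank sample as a deterministic witness of generic full rank, and otherwise conclude rank deficiency with high probability. The matrix_fn interface and the toy example below are invented for illustration and are not the paper's algorithm.

import numpy as np

def randomized_full_rank(matrix_fn, n_params, trials=20, rng=None):
    """Decide whether matrix_fn(theta) has full column rank for generic theta.
    One full-rank sample is a deterministic witness; if every sample is rank
    deficient, the matrix is rank deficient for all theta with high probability
    (Schwartz-Zippel style reasoning on the minors)."""
    rng = rng or np.random.default_rng()
    for _ in range(trials):
        theta = rng.uniform(-10, 10, size=n_params)
        M = matrix_fn(theta)
        if np.linalg.matrix_rank(M) == M.shape[1]:
            return True
    return False

# Invented toy example: a 3x2 matrix whose columns are dependent for every theta.
def degenerate(theta):
    a, b = theta
    col = np.array([1.0, a, a * b])
    return np.column_stack([col, 2.0 * col])

print(randomized_full_rank(degenerate, n_params=2))   # False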
The increased availability of data describing biological interactions provides important clues on how complex chains of genes and proteins interact with each other. Most previous approaches either restrict their attention to analyzing simple substructures such as paths or trees in these graphs, or use heuristics that do not provide performance guarantees when general substructures are analyzed. We investigate a formulation to model pathway structures directly and give a probabilistic algorithm to find an optimal path structure in O(4^k n^(2t) k^(t + log(t+1) + 2.92) t^2) time and O(n^t k log k + m) space, where n and m are respectively the number of vertices and the number of edges in the given network, k is the number of vertices in the path structure, and t is the maximum number of vertices (i.e., "width") at each level of the structure. Even for the case t = 1, which corresponds to finding simple paths of length k, our time complexity 4^k n^(O(1)) is a significant improvement over previous probabilistic approaches. To allow for the analysis of multiple pathway structures, we further consider a variant of the algorithm that provides probabilistic guarantees for the top suboptimal path structures with a slight increase in time and space. We show that our algorithm can identify pathway structures with high sensitivity by applying it to protein interaction networks in the DIP database.
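The case t = 1 mentioned above is the classical problem of detecting a simple path on k vertices, for which colour coding is the standard probabilistic technique. The Python sketch below runs colour-coding trials in their textbook form (random k-colouring followed by dynamic programming over colour sets), which is slower than the 4^k-type bound of the paper; the graph and trial count are illustrative.

import random

def colour_coding_trial(adj, k):
    """One colour-coding trial: colour the vertices with k colours uniformly at
    random, then run a DP over (endpoint, set of colours used) to look for a
    colourful path on k vertices. A fixed k-path survives a trial with probability
    k!/k**k, so O(e**k) independent trials find an existing path with high probability."""
    colour = {v: random.randrange(k) for v in adj}
    # reachable[v]: colour sets of colourful paths currently ending at v
    reachable = {v: {frozenset([colour[v]])} for v in adj}
    for _ in range(k - 1):                     # grow paths one vertex at a time
        new = {v: set() for v in adj}
        for u in adj:
            for v in adj[u]:
                for cs in reachable[u]:
                    if colour[v] not in cs:
                        new[v].add(cs | {colour[v]})
        reachable = new
    return any(reachable[v] for v in adj)

def has_k_path(adj, k, trials=300):
    """Probabilistic test for a simple path on k vertices (adjacency-list graph)."""
    return any(colour_coding_trial(adj, k) for _ in range(trials))

# Illustrative usage: a 6-cycle certainly contains a simple path on 5 vertices.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(has_k_path(cycle, 5))    # True with high probability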
Recently Schöning has shown that a simple local-search algorithm for 3SAT achieves the currently best upper bound, i.e., an expected running time of 1.334^n. In this paper, we show that this algorithm can be modified to run much faster if there is some kind of imbalance in the satisfying assignments and we have (partial) knowledge about it. In particular, if a satisfying assignment has imbalanced 0's and 1's, i.e., p1·n 1's and (1 - p1)·n 0's, then we can find a solution in time 1.260^n when p1 = 0.71 and 1.072^n when p1 = 0.1. Such an imbalance often exists in SAT instances reduced from other problems. As a concrete example, we investigate a reduction from 3DM and show that our new approach is nontrivially faster than its direct algorithms. Preliminary experimental results are also given.
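To make the modification concrete, here is a minimal Python sketch of Schöning-style local search in which the initial assignment is drawn with bias p1 towards 1 rather than uniformly. The restart count, the 3n-step walk length and the toy formula are illustrative choices; this shows plain biased initialisation, not the paper's analysis or its 3DM reduction.

import random

def schoening_biased(clauses, n_vars, p1=0.5, restarts=2000):
    """Schoening-style local search for 3SAT. Literals are +/- variable indices
    (1-based). The initial assignment is biased: each variable is set to True
    with probability p1, exploiting known imbalance in satisfying assignments."""
    for _ in range(restarts):
        assign = [None] + [random.random() < p1 for _ in range(n_vars)]
        for _ in range(3 * n_vars):                       # random walk of length 3n
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign[1:]
            lit = random.choice(random.choice(unsat))     # flip a variable from some unsatisfied clause
            assign[abs(lit)] = not assign[abs(lit)]
    return None

# Toy formula: (x1 v x2 v ~x3) & (~x1 v x3 v x4) & (~x2 v ~x4 v x1)
clauses = [(1, 2, -3), (-1, 3, 4), (-2, -4, 1)]
print(schoening_biased(clauses, n_vars=4, p1=0.7))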