Probabilistic algorithms to evaluate result reliability in qualitative chromatographic analysis are discussed in the paper. The elementary uncertainty (P0), associated with a single test (comparison of sample and reference peak positions), is treated as the sum of the misidentification and omission probabilities. Both constituents are calculated separately using a simplified model and Laplace functions. In the model, the main source of elementary uncertainty is random, normally distributed deviation in the measurement of retention characteristics. Algorithms to calculate both constituents of P0 have to take into account the real measurement precision, the supposed composition of the sample, the content of the database, the chosen coincidence criterion and other factors. At a high selectivity of retention, the 3σ value is recommended as the most convenient coincidence criterion; it leads to more reliable and unambiguous attribution of peaks in the chromatogram. For more complicated cases, probabilistic algorithms based on the Bernoulli theorem are proposed to calculate the summary uncertainty of identification associated with a multiple test. They take into account the P0 value, the number of repeated single tests (n) under similar or different conditions, and the chosen identification criterion K (the minimal number of coincidences). The above-mentioned algorithms allow a priori optimisation of the mode of operation of any identification software system associated with the chromatograph, and can be useful during metrological validation of the corresponding qualitative analysis methods.
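The two calculations the abstract describes can be sketched as follows. The function names and example numbers are illustrative, not from the paper: the omission probability outside a ±kσ coincidence window follows from the normal-deviation model, and the summary reliability of a multiple test follows from the Bernoulli (binomial) theorem.

```python
from math import comb, erf, sqrt

def omission_probability(k_sigma):
    """Probability that a normally distributed retention-time deviation
    falls outside the +/- k*sigma coincidence window (one constituent
    of the elementary uncertainty P0)."""
    return 1.0 - erf(k_sigma / sqrt(2.0))

def summary_identification_probability(p_single, n, k):
    """Bernoulli-theorem summary reliability of a multiple test:
    probability of at least k coincidences among n independent single
    tests, each succeeding with probability p_single = 1 - P0."""
    return sum(comb(n, j) * p_single**j * (1.0 - p_single)**(n - j)
               for j in range(k, n + 1))

# 3*sigma window: a true component is missed in only ~0.27% of single tests
p_omit = omission_probability(3.0)
# n = 3 repeated tests with identification criterion K = 2 coincidences
p_id = summary_identification_probability(0.9, 3, 2)
```

Varying n and K in the second function is exactly the kind of a priori optimisation of the identification criterion the abstract mentions.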
Proximity searches become very difficult in "high dimensional" metric spaces, that is, those whose histogram of distances has a large mean and/or a small variance. This so-called "curse of dimensionality", well known in vector spaces, is also observed in metric spaces. The search complexity grows sharply with the dimension and with the search radius. We present a general probabilistic framework applicable to any search algorithm, whose net effect is to reduce the search radius. The higher the dimension, the more effective the technique. We illustrate its practical performance empirically on a particular class of algorithms, where large improvements in search time are obtained at the cost of a very small error probability. (C) 2002 Elsevier Science B.V. All rights reserved.
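The "large mean and/or small variance" criterion can be made quantitative. A common definition in the metric-space indexing literature (used here as an illustrative sketch, not necessarily the paper's exact measure) sets the intrinsic dimensionality to μ²/(2σ²) over the distance histogram:

```python
from statistics import mean, pvariance

def intrinsic_dimensionality(distances):
    """Intrinsic dimensionality rho = mu^2 / (2 * sigma^2) of a metric
    space, estimated from a sample of pairwise distances: a large mean
    and/or a small variance of the histogram yields a large rho,
    i.e. a harder space to search."""
    mu = mean(distances)
    return mu * mu / (2.0 * pvariance(distances))

# A concentrated histogram (small spread around a large mean) is
# "high dimensional" in this sense; a spread-out one is not:
rho_hard = intrinsic_dimensionality([9.8, 10.0, 10.2])
rho_easy = intrinsic_dimensionality([1.0, 5.0, 10.0])
```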
We solve two computational problems concerning plane algebraic curves over finite fields: generating a uniformly random point, and finding all points deterministically in amortized polynomial time (over a prime field, for nonexceptional curves).
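To fix notation, here is the brute-force baseline for the second problem, listing the affine points of a plane curve over a prime field. Its cost is what the paper improves on; the helper name and example curve are illustrative.

```python
def affine_curve_points(f, p):
    """All affine points of the plane curve f(x, y) = 0 over the prime
    field F_p, found by exhaustive search. This costs O(p^2) field
    operations -- exponential in the input size log p -- whereas the
    paper's algorithms run in (amortized) polynomial time."""
    return [(x, y) for x in range(p) for y in range(p) if f(x, y) % p == 0]

# The "circle" x^2 + y^2 - 1 over F_5 has four affine points
pts = affine_curve_points(lambda x, y: x * x + y * y - 1, 5)
```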
This paper studies the evaluation of routing algorithms from the perspective of reachability routing, where the goal is to determine all paths between a sender and a receiver. Reachability routing is becoming relevant with the changing dynamics of the Internet and the emergence of low-bandwidth wireless/ad hoc networks. We make the case for reinforcement learning as the framework of choice to realize reachability routing, within the confines of the current Internet infrastructure. The setting of the reinforcement learning problem offers several advantages, including loop resolution, multi-path forwarding capability, cost-sensitive routing, and minimal state overhead, while maintaining the incremental spirit of current backbone routing algorithms. We identify research issues in reinforcement learning applied to the reachability routing problem to achieve a fluid and robust backbone routing framework. This paper also presents the design, implementation and evaluation of a new reachability routing algorithm that uses a model-based approach to achieve cost-sensitive multi-path forwarding; performance assessment of the algorithm in various troublesome topologies shows consistently superior performance over classical reinforcement learning algorithms. The paper is targeted toward practitioners seeking to implement a reachability routing algorithm. (C) 2003 Published by Elsevier B.V.
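The classical reinforcement-learning baseline the abstract compares against is Q-routing (Boyan and Littman). A minimal sketch of its update rule, not the paper's model-based algorithm, with illustrative names and a nested-dict Q-table:

```python
def q_routing_update(Q, x, d, y, q_delay, s_delay, alpha=0.5):
    """Classical Q-routing update: node x forwarded a packet bound for
    destination d via neighbor y, observing queueing delay q_delay at x
    and transmission delay s_delay to y. Q[x][d][y] estimates the total
    remaining delivery time via y; it is nudged toward the observed
    delay plus y's own best estimate of the time onward."""
    t = min(Q[y][d].values()) if Q[y].get(d) else 0.0
    Q[x][d][y] += alpha * (q_delay + s_delay + t - Q[x][d][y])
    return Q[x][d][y]

# Tiny example: x's estimate via y (10.0) is pulled toward the observed
# 2.0 units of delay plus y's best onward estimate (4.0), giving 8.0.
Q = {"x": {"d": {"y": 10.0}}, "y": {"d": {"z": 4.0}}}
q_routing_update(Q, "x", "d", "y", 1.0, 1.0)
```

Because each node keeps one estimate per (destination, neighbor) pair, the table naturally supports the multi-path forwarding the abstract emphasizes.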
Centroidal Voronoi tessellations (CVTs) are Voronoi tessellations of a region such that the generating points of the tessellations are also the centroids of the corresponding Voronoi cells. In this paper, some probabilistic methods for determining CVTs and their parallel implementations on distributed memory systems are presented. By using multi-sampling in a new probabilistic algorithm we introduce, more accurate and efficient approximations of CVTs are obtained without explicitly constructing Voronoi diagrams. The new algorithm lends itself well to parallelization, i.e., near-perfect linear speedup in the number of processors is achieved. The results of computational experiments performed on a CRAY T3E-600 system are provided which illustrate the superior sequential and parallel performance of the new algorithm when compared to existing algorithms. In particular, for the same amount of work, the new algorithms produce significantly more accurate CVTs. (C) 2002 Elsevier Science B.V. All rights reserved.
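The core idea — approximating a CVT by random sampling instead of explicit Voronoi construction — can be sketched as a Monte Carlo Lloyd iteration. This is a simplified illustration in the spirit of the abstract, not the authors' exact multi-sampling scheme:

```python
import random

def probabilistic_cvt(generators, sample, n_iter=50, n_samples=10000):
    """Monte Carlo Lloyd iteration: draw random points from the region,
    bin each point by its nearest generator, and move every generator to
    the mean of its bin. No Voronoi diagram is ever constructed; the
    bins approximate the Voronoi cells, and the bin means approximate
    their centroids."""
    gens = [list(g) for g in generators]
    for _ in range(n_iter):
        sums = [[0.0] * len(g) for g in gens]
        counts = [0] * len(gens)
        for _ in range(n_samples):
            p = sample()
            i = min(range(len(gens)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(p, gens[k])))
            counts[i] += 1
            for d, v in enumerate(p):
                sums[i][d] += v
        for k in range(len(gens)):
            if counts[k]:
                gens[k] = [s / counts[k] for s in sums[k]]
    return gens

# Two generators on the unit square converge near (0.25, 0.5) and
# (0.75, 0.5), the CVT of the square with two cells.
random.seed(0)
g = probabilistic_cvt([[0.1, 0.5], [0.9, 0.5]],
                      lambda: [random.random(), random.random()],
                      n_iter=20, n_samples=2000)
```

The per-iteration sampling loop is embarrassingly parallel, which is why this family of algorithms scales so well on distributed memory machines.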
We present a novel multi-resolution point sample rendering algorithm for keyframe animations. The algorithm accepts triangle meshes of arbitrary topology as input, which are animated by specifying different sets of vertices at keyframe positions. A multi-resolution representation consisting of prefiltered point samples and triangles is built to represent the animated mesh at different levels of detail. We introduce a novel sampling and stratification algorithm to efficiently generate suitable point sample sets for moving triangle meshes. Experimental results demonstrate that the new data structure can be used to render highly complex keyframe animations like crowd scenes in real-time.
We develop probabilistic algorithms that solve problems of geometric elimination theory using small memory resources. These algorithms are obtained by adapting a general transformation due to A. Borodin which converts uniform boolean circuit depth into sequential (Turing machine) space. The boolean circuits themselves are developed using techniques based on the computation of a primitive element of a suitable zero-dimensional algebra and diophantine considerations. Our algorithms considerably improve the space requirements of elimination algorithms based on rewriting techniques (Gröbner solving), while simultaneously achieving time performance of the same order.
We present a new probabilistic algorithm to compute the Smith normal form of a sparse integer matrix A ∈ ℤ^(m×n). The algorithm treats A as a "black box": A is only used to compute matrix-vector products, and we do not access individual entries in A directly. The algorithm requires about O(m² log‖A‖) black box evaluations w ↦ Aw mod p for word-sized primes p and w ∈ ℤ_p^(n×1), plus O(m²n log‖A‖ + m³ log²‖A‖) additional bit operations. For sparse matrices this represents a substantial improvement over previously known algorithms. The new algorithm suffers from no "fill-in" or intermediate value explosion, and uses very little additional space. We also present an asymptotically fast algorithm for dense matrices which requires about O(n · MM(m) log‖A‖ + m³ log²‖A‖) bit operations, where O(MM(m)) operations are sufficient to multiply two m × m matrices over a field. Both algorithms are probabilistic of the Monte Carlo type: on any input they return the correct answer with a controllable, exponentially small probability of error.
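The black-box interface itself is simple to state. A sketch, assuming a row-wise sparse storage format (one list of (column, value) pairs per row — an illustrative choice, not prescribed by the paper):

```python
def blackbox_apply(sparse_rows, w, p):
    """The only access a black-box algorithm needs to the matrix:
    compute (A w) mod p, where the sparse integer matrix A is stored as
    one list of (column, value) pairs per row. Cost is proportional to
    the number of nonzero entries; there is no fill-in, and individual
    entries of a dense A are never materialized."""
    return [sum(v * w[j] for j, v in row) % p for row in sparse_rows]

# A = [[2, 3], [0, 5]] stored sparsely, applied to w = (1, 1) mod 7
result = blackbox_apply([[(0, 2), (1, 3)], [(1, 5)]], [1, 1], 7)
```

Everything the Smith normal form computation learns about A flows through repeated calls of this form with word-sized primes p.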
A secure reliable multicast protocol enables a process to send a message to a group of recipients such that all correct destinations receive the same message, despite the malicious efforts of fewer than a third of the total number of processes, including the sender. This has been shown to be a useful tool in building secure distributed services, albeit with a cost that typically grows linearly with the size of the system. For very large networks, where this is prohibitive, we present two approaches for reducing the cost. First, we show a protocol whose cost is on the order of the number of tolerated failures. Second, we show how relaxing the consistency requirement to a probabilistic guarantee can reduce the associated cost, effectively to a constant.
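The constant-cost, probabilistic-guarantee trade-off is typified by gossip-style dissemination. A minimal sketch of one push-gossip round (illustrative only; it shows the cost model, not the authors' protocol and none of its security machinery):

```python
import random

def push_gossip_round(informed, processes, fanout):
    """One round of push gossip: every informed process forwards the
    multicast payload to `fanout` peers chosen uniformly at random.
    Per-process cost is the constant `fanout`, but delivery to all
    correct processes is only probabilistically guaranteed."""
    newly = set(informed)
    for _ in informed:
        newly.update(random.sample(processes, fanout))
    return newly

# A few rounds spread the message through a 100-process group.
random.seed(1)
members = list(range(100))
informed = {0}
for _ in range(8):
    informed = push_gossip_round(informed, members, fanout=3)
```

With a fixed fanout, the expected number of rounds to reach everyone grows only logarithmically in the group size, which is where the effectively constant per-process cost comes from.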
Massively parallel computers (MPCs) introduce new requirements for system-level fault diagnosis, like handling a huge number of processing elements in a heterogeneous system. They also have specific attributes, such as regular topology and low local complexity. Traditional deterministic methods of system-level diagnosis did not consider these issues. This paper presents a new approach, called local information diagnosis, that exploits the characteristics of massively parallel systems. The paper defines the diagnostic model, which is based on generalized test invalidation to handle inhomogeneity in multiprocessors. Five effective probabilistic diagnostic algorithms using the proposed method are also given, and their space and time complexities are estimated.