Background: All infectious disease-oriented clinical diagnostic assays in use today focus on detecting the presence of a single, well-defined target agent or a set of agents. In recent years, microarray-based diagnostics have been developed that greatly facilitate the highly parallel detection of multiple microbes that may be present in a given clinical specimen. While several algorithms have been described for the interpretation of diagnostic microarrays, none of the existing approaches is capable of incorporating training data generated from positive control samples to improve performance. Results: To specifically address this issue we have developed a novel interpretive algorithm, VIPR (Viral Identification using a probabilistic algorithm), which uses Bayesian inference to capitalize on empirical training data to optimize detection sensitivity. To illustrate this approach, we have focused on the detection of viruses that cause hemorrhagic fever (HF) using a custom HF-virus microarray. VIPR was used to analyze 110 empirical microarray hybridizations generated from 33 distinct virus species. An accuracy of 94% was achieved as measured by leave-one-out cross validation. Conclusions: VIPR outperformed previously described algorithms for this dataset. The VIPR algorithm has the potential to be broadly applicable in clinical diagnostic settings, wherein positive controls are typically readily available for the generation of training data.
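The core idea (train a class-conditional intensity model per virus from positive-control hybridizations, then score a new hybridization by Bayesian inference) can be sketched as follows. This is a simplified illustration, not the published VIPR implementation: the per-probe Gaussian model, the uniform priors, and all probe names and values are assumptions for the example.

```python
import math

# Simplified Bayesian detector in the spirit of the abstract: train a
# Gaussian intensity model per virus from positive-control hybridizations,
# then classify a new hybridization by maximum posterior (uniform priors).
# The per-probe Gaussian model is an assumption, not the VIPR algorithm.

def train(control_hybs):
    """Estimate per-probe mean and variance from control hybridizations."""
    n = len(control_hybs)
    model = {}
    for probe in control_hybs[0]:
        vals = [h[probe] for h in control_hybs]
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / n or 1e-6  # avoid var = 0
        model[probe] = (mu, var)
    return model

def log_likelihood(sample, model):
    """Log-probability of the observed intensities under one virus model."""
    ll = 0.0
    for probe, (mu, var) in model.items():
        x = sample[probe]
        ll += -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return ll

def classify(sample, models):
    """Return the virus whose trained model best explains the sample."""
    return max(models, key=lambda virus: log_likelihood(sample, models[virus]))
```

Training one model per virus and calling `classify` on a new sample then returns the virus whose trained model best explains the observed probe intensities.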
Recent advances in the field of video communication require an efficient video compression technique that can transmit good-quality video over very low bandwidth. Mesh-based video compression techniques are often used to achieve this goal. A mesh is formed on the boundary of an object inside a video frame, and the nodes of the mesh are transmitted to describe the motion of the object. The motion vectors at the nodes determine the coarse-to-fine hierarchy of the mesh. Sometimes this technique fails to construct a proper hierarchical mesh for an object and cannot capture its motion properly. This paper introduces a new probabilistic approach to constructing a hierarchical mesh that can describe the motion of an object with fair accuracy without knowing the proper shape of the object. The algorithm can also sense the amount of motion and select reference frames accordingly, so that minimal information needs to be transmitted without compromising video quality.
A probabilistic damage diagnostic algorithm based on correlation analysis was investigated to locate single or multiple damage sites. To highlight the changes in signals corresponding to the presence of damage, digital damage fingerprints (DDFs) were extracted from the captured Lamb wave signals. The algorithm was validated through experimental studies in which two artificially introduced notches in an aluminum plate were successfully located using the constructed images of the probability of the presence of damage. Damage identification using either the captured wave signals or their DDFs agreed well with the actual damage configurations. The concept of virtual sensing paths (VSPs) was proposed to enhance the performance of the algorithm. The results demonstrated that the correlation-based algorithm, with the application of DDFs and VSPs, is capable of identifying multiple damage sites in plate-like structures.
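A minimal sketch of a correlation-based damage-probability image of this general kind is below. The damage index (1 - |correlation|) per sensing path, the exponential distance weighting, and the grid layout are illustrative assumptions, not the paper's DDF-based formulation.

```python
import math

# Sketch of a correlation-based damage-probability image: each sensing
# path contributes its damage index to grid points near the path. The
# index and weighting are illustrative assumptions, not the paper's DDFs.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def damage_image(paths, grid):
    """paths: iterable of (transmitter_xy, receiver_xy, baseline, current)."""
    img = {point: 0.0 for point in grid}
    for (tx, ty), (rx, ry), baseline, current in paths:
        di = 1 - abs(pearson(baseline, current))  # damage index of the path
        for gx, gy in grid:
            # points near the transmitter-receiver path receive more weight
            detour = (math.hypot(gx - tx, gy - ty)
                      + math.hypot(gx - rx, gy - ry)
                      - math.hypot(tx - rx, ty - ry))
            img[(gx, gy)] += di * math.exp(-detour)
    return img
```

Peaks in the resulting image indicate likely damage locations; multiple paths crossing a damaged region reinforce each other there.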
Authors: Demin, Alexander; van der Hoeven, Joris
HSE Univ, Fac Comp Sci, Pokrovsky Blvd 11c4, Moscow 109028, Russia
Inst Polytech Paris, CNRS, Ecole Polytech, Lab Informat Ecole Polytech (LIX, UMR 7161), Batiment Alan Turing, CS35003, 1 Rue Honore Estienne, F-91120 Palaiseau, France
Consider a sparse polynomial in several variables, given explicitly as a sum of non-zero terms with coefficients in an effective field. In this paper, we present several algorithms for factoring such polynomials and for related tasks (such as gcd computation, square-free factorization, content-free factorization, and root extraction). Our methods are all based on sparse interpolation, but follow two main lines of attack: iteration on the number of variables and more direct reductions to the univariate or bivariate case. We present detailed probabilistic complexity bounds in terms of the complexity of sparse interpolation and evaluation. (c) 2025 Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
Solving a polynomial system over a finite field is an NP-complete problem of fundamental importance in both pure and applied mathematics. In particular, the security of the so-called multivariate public-key cryptosystems, such as HFE of Patarin and UOV of Kipnis et al., is based on the postulated hardness of solving quadratic polynomial systems over a finite field. Lokshtanov et al. (2017) were the first to introduce a probabilistic algorithm that, in the worst case, solves a Boolean polynomial system in time O*(2^(δn)), for some δ ∈ (0, 1) depending only on the degree of the system, thus beating the brute-force complexity O*(2^n). Later, Bjorklund et al. (2019) and then Dinur (2021) improved this method and devised probabilistic algorithms with a smaller exponent coefficient δ. We survey the theory behind these probabilistic algorithms, and we illustrate the results that we obtained by implementing them in C. In particular, for random quadratic Boolean systems, we estimate the practical complexities of the algorithms and their probabilities of success as their parameters change. (C) 2021 Elsevier B.V. All rights reserved.
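For contrast with the O*(2^(δn)) algorithms the survey covers, the O*(2^n) brute-force baseline over GF(2) is easy to sketch. The encoding used here (a polynomial as a list of monomials, a monomial as a tuple of variable indices, with () denoting the constant 1) is an illustrative choice, not taken from the surveyed C implementations.

```python
from itertools import product

# The O*(2^n) brute-force baseline that the probabilistic algorithms of
# Lokshtanov et al., Bjorklund et al. and Dinur improve on. The polynomial
# encoding is an illustrative choice.

def eval_poly(monomials, x):
    """Evaluate a GF(2) polynomial at the 0/1 point x."""
    return sum(all(x[i] for i in mono) for mono in monomials) % 2

def brute_force_solve(polys, n):
    """Return some common root in GF(2)^n of all polynomials, or None."""
    for x in product((0, 1), repeat=n):
        if all(eval_poly(p, x) == 0 for p in polys):
            return x
    return None
```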
We introduce a clock synchronization algorithm based on the one presented by Cristian in [1]. Our method significantly improves the performance of the original algorithm by using a more accurate remote clock reading rule. We obtain an algorithm that keeps the clocks synchronized (within a given precision) with fewer messages per time unit. The protocol is validated and its effectiveness is evaluated by simulation.
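The classic remote clock reading rule that Cristian-style algorithms build on can be sketched as follows; the improved reading rule of this paper is not reproduced here, and `ask_server` is a hypothetical RPC stub supplied by the caller.

```python
import time

# Classic Cristian-style remote clock reading: the server's reply is
# assumed to describe its clock at the midpoint of the round trip.
# `ask_server` is a hypothetical RPC stub supplied by the caller.

def read_remote_clock(ask_server, local_clock=time.monotonic):
    t0 = local_clock()
    server_time = ask_server()        # server clock sampled during [t0, t1]
    t1 = local_clock()
    rtt = t1 - t0
    estimate = server_time + rtt / 2  # assumes symmetric network delays
    max_error = rtt / 2               # true offset lies within +/- rtt / 2
    return estimate, max_error
```

The error bound shrinks with the round-trip time, which is why better reading rules (such as the one proposed in the abstract) can maintain a given precision with fewer synchronization messages.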
The distributed trigger counting (DTC) problem is to detect w triggers in a distributed system consisting of n nodes. DTC algorithms can be used in monitoring systems that use sensors to detect a significant global change. When designing an efficient DTC algorithm, the following goals should be considered: minimizing the total number of messages exchanged for counting triggers, and distributing the communication load evenly among nodes. In this paper, we present an efficient DTC algorithm, DDR-coin (Deterministic Detection of Randomly generated coins). The message complexity (the total number of exchanged messages) of DDR-coin is O(n log_n(w/n)) on average. MaxRcvLoad (the maximum number of messages any node receives while detecting w triggers) is O(log_n(w/n)) on average. DDR-coin is not an exact algorithm: even though w triggers are received by the n nodes, it can fail to raise an alarm, although only with negligible probability. However, DDR-coin is more efficient than exact DTC algorithms on average, and the gap widens for larger n. We implemented a prototype of the proposed scheme using NetLogo 6.1.1 and confirmed that the experimental results are close to our mathematical analysis. Compared with the previous schemes (TreeFill, CoinRand, and RingRand), DDR-coin shows smaller message complexity and MaxRcvLoad.
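A generic coin-based sketch of the underlying probabilistic idea (forward only a random sample of triggers so that the coordinator counts far fewer messages than triggers) is shown below. This is a CoinRand-style illustration, not the DDR-coin protocol itself, whose deterministic-detection details and parameter choices are in the paper.

```python
import random

# Generic coin-based trigger counting: each node forwards only a random
# sample of its triggers, trading exactness for fewer exchanged messages.
# This is a CoinRand-style sketch, not DDR-coin itself.

class Coordinator:
    def __init__(self, expected_coins):
        self.coins = 0
        self.expected = expected_coins
        self.alarm = False

    def receive_coin(self):
        self.coins += 1
        if self.coins >= self.expected:
            self.alarm = True  # roughly w triggers have occurred

class Node:
    def __init__(self, coordinator, p, rng=random):
        self.coord, self.p, self.rng = coordinator, p, rng

    def on_trigger(self):
        # forward a coin with probability p instead of reporting every
        # trigger; the coordinator sees about p * (number of triggers) coins
        if self.rng.random() < self.p:
            self.coord.receive_coin()
```

With p < 1 the alarm threshold is only met in expectation, which is exactly the exactness-versus-message-count trade-off the abstract describes.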
When using information retrieval systems, information related to searches is typically stored in files known as log files. In contrast, the past search results of previously submitted queries are usually ignored. Nevertheless, past search results can be profitable for new searches. Some approaches in Information Retrieval exploit previous searches in a customizable way for a single user; approaches that deal with past searches collectively are less common. This paper presents such an approach, using the past results of similar queries submitted by other users to build the answers to newly submitted queries. It proposes two Monte Carlo algorithms that build the result for a new query by selecting relevant documents associated with the most similar past query. Experiments were carried out to evaluate the effectiveness of the proposed algorithms on several dataset variants. The algorithms were also compared with a baseline approach based on the cosine measure, from which they reuse past results. Simulated datasets were designed for the experiments, following the Cranfield paradigm, which is well established in the Information Retrieval domain. The empirical results demonstrate the value of our approach.
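The selection step described above (match the new query to the most similar past query by cosine similarity, then Monte Carlo-sample documents from its stored results) might be sketched like this; the score-proportional sampling and the data layout are illustrative assumptions, not the paper's exact algorithms.

```python
import math
import random

# Sketch of the selection step: pick the most similar past query by
# cosine similarity, then Monte Carlo-sample documents from its stored
# results, favoring higher-scored documents. Layout is an assumption.

def cosine(u, v):
    dot = sum(u.get(term, 0.0) * w for term, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def monte_carlo_answer(new_query, past, k, rng=random):
    """past: list of (query_vector, [(doc_id, relevance_score), ...])."""
    _, results = max(past, key=lambda entry: cosine(new_query, entry[0]))
    docs = [doc for doc, _ in results]
    scores = [score for _, score in results]
    # sample k documents; higher-scored documents are drawn more often
    return rng.choices(docs, weights=scores, k=k)
```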
Generators of finite cyclic groups play an important role in many cryptographic algorithms, such as public-key ciphers, digital signatures, entity identification, and key agreement algorithms. These kinds of cryptographic algorithms are crucial for all secure communication in computer networks and for secure information processing (in particular in mobile services, banking, and electronic administration). In this paper, proofs of correctness of two probabilistic algorithms (for finding generators of finite cyclic groups and primitive roots) are given, along with an assessment of their average-case time complexity.
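The standard probabilistic search for a primitive root modulo a prime p (draw random candidates and test them against the prime factors of p - 1) can be sketched as follows; trial-division factoring of p - 1 is an illustrative shortcut that is only practical for small p.

```python
import random

# Standard probabilistic search for a primitive root modulo a prime p:
# draw random candidates g and accept once g^((p-1)/q) != 1 (mod p) for
# every prime q dividing p - 1. Trial-division factoring of p - 1 is an
# illustrative shortcut for small p.

def prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def find_primitive_root(p, rng=random):
    qs = prime_factors(p - 1)
    while True:
        g = rng.randrange(2, p)
        # g generates the whole group iff it lies in no proper subgroup,
        # i.e. no test exponent (p-1)/q collapses it to 1
        if all(pow(g, (p - 1) // q, p) != 1 for q in qs):
            return g
```

Since a fraction φ(p-1)/(p-1) of the candidates are generators, the expected number of random draws is small, which is the kind of average-case behavior the paper's complexity assessment quantifies.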
A maximum matching is a matching of maximum cardinality. The set of nodes taking part in a maximum matching is denoted by Nodes(G), and the cardinality of the matching by Card(G). This analysis presents two algorithms for finding a maximal nonsingular square submatrix of A_G (or, equivalently, Card(G) and Nodes(G)) in O(s^(β-1) t) arithmetic operations, where O(n^β) is the complexity of matrix multiplication. The best bound on β currently claimed is 2.49+, and recent developments may decrease it even further.
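The probabilistic rank connection behind such algorithms can be illustrated for the bipartite case: substitute random values modulo a large prime into the Edmonds matrix of the graph; by the Schwartz-Zippel lemma, its rank equals Card(G) with high probability. The prime and the pure-Python elimination below are illustrative choices, not the paper's algorithms.

```python
import random

# Rank trick behind matching-via-matrices algorithms, bipartite case:
# fill the Edmonds matrix with random values mod a large prime; its rank
# equals Card(G) with high probability (Schwartz-Zippel lemma).

def rank_mod_p(mat, p):
    """Rank of a matrix over GF(p) by Gauss-Jordan elimination."""
    mat = [row[:] for row in mat]
    rank, rows = 0, len(mat)
    for c in range(len(mat[0]) if mat else 0):
        piv = next((r for r in range(rank, rows) if mat[r][c] % p), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][c], -1, p)  # modular inverse (Python 3.8+)
        for r in range(rows):
            if r != rank and mat[r][c] % p:
                f = mat[r][c] * inv % p
                mat[r] = [(a - f * b) % p for a, b in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def matching_cardinality(edges, n_left, n_right, p=2_147_483_647, rng=random):
    """Probabilistic Card(G) for a bipartite graph given as (i, j) edges."""
    A = [[0] * n_right for _ in range(n_left)]
    for i, j in edges:
        A[i][j] = rng.randrange(1, p)  # one random nonzero entry per edge
    return rank_mod_p(A, p)
```

Fast matrix multiplication replaces the elimination here to reach the O(n^β)-type bounds discussed in the abstract; this sketch only shows why rank and matching cardinality coincide.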