ISBN:
(Print) 9781450322638
Locality Sensitive Hashing (LSH) is widely recognized as one of the most promising approaches to similarity search in high-dimensional spaces. Based on LSH, a considerable number of nearest neighbor search algorithms have been proposed in the past, with some of them having been used in many real-life applications. Apart from their demonstrated superior performance in practice, the popularity of the LSH algorithms is mainly due to their provable performance bounds on query cost, space consumption and failure probability. In this paper, we show that a surprising gap exists between LSH theory and widely practiced algorithm analysis techniques. In particular, we discover that a critical assumption made in the classical LSH algorithm analysis does not hold in practice, which suggests that using the existing methods to analyze the performance of practical LSH algorithms is a conceptual mismatch. To address this problem, a novel analysis model is developed that bridges the gap between LSH theory and the methods used to analyze LSH algorithm performance. With the help of this model, we identify some important flaws in the commonly used analysis methods in the LSH literature. The validity of this model is verified through extensive experiments with real datasets.
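As background for the analysis gap discussed above, the core LSH idea can be sketched in a few lines. The sketch below is an illustration only, not the paper's analysis model: it uses random-hyperplane hashing for cosine similarity, where nearby vectors agree on most hash bits and distant ones do not. All names and parameters are assumptions.

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    # one random Gaussian normal vector per hash bit
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(v, planes):
    # bit i records which side of hyperplane i the vector v falls on
    return tuple(1 if sum(p_j * v_j for p_j, v_j in zip(p, v)) >= 0 else 0
                 for p in planes)

def hamming(a, b):
    # number of hash bits on which two signatures disagree
    return sum(x != y for x, y in zip(a, b))

planes = random_hyperplanes(dim=8, n_bits=16, seed=42)
base = [1.0] * 8
near = [1.0] * 7 + [0.9]    # small perturbation of base
far = [-1.0] * 8            # points in the opposite direction
sig_base = lsh_signature(base, planes)
sig_near = lsh_signature(near, planes)
sig_far = lsh_signature(far, planes)
# similar vectors disagree on far fewer bits than dissimilar ones
```

Bucketing points by such signatures is what yields the provable query-cost and failure-probability bounds the abstract refers to.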
System validation in the industry needs a lot of servers to generate test sets and run them on the validated processors or SoCs. With the requirement of a competitive time-to-market, this process has to happen speedil...
We present a framework to verify both functional correctness and (amortized) worst-case complexity of practically efficient algorithms. We implemented a stepwise refinement approach, using the novel concept of resource currencies to naturally structure the resource analysis along the refinement chain and to allow a fine-grained analysis of operation counts. Our framework targets the LLVM intermediate representation. We extend its semantics from earlier work with a cost model. As case studies, we verify the amortized constant-time push operation on dynamic arrays and the O(n log n) introsort algorithm, and refine them down to efficient LLVM implementations. Our sorting algorithm performs on par with the state-of-the-art implementation found in the GNU C++ Library, and provably satisfies the complexity required by the C++ standard.
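The amortized constant-time push mentioned in the case study can be illustrated with a minimal sketch (plain Python, not the paper's verified Isabelle/LLVM development): doubling the capacity on overflow keeps the total number of element copies below 2n, so each push costs O(1) amortized.

```python
class DynArray:
    """Dynamic array with capacity doubling; push is amortized O(1)."""

    def __init__(self):
        self.cap = 1
        self.n = 0
        self.buf = [None]
        self.copies = 0          # total element copies caused by regrowth

    def push(self, x):
        if self.n == self.cap:   # full: allocate double, copy everything over
            self.cap *= 2
            new = [None] * self.cap
            for i in range(self.n):
                new[i] = self.buf[i]
            self.copies += self.n
            self.buf = new
        self.buf[self.n] = x
        self.n += 1

a = DynArray()
for i in range(1000):
    a.push(i)
# doublings copy 1 + 2 + 4 + ... < 2n elements in total,
# so the average cost per push is bounded by a constant
```

Charging two units of "copy currency" to each push covers all future regrowth copies, which is exactly the kind of accounting the paper's resource currencies make formal.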
Authors:
Liu, Yongliang; Kim, Hee-Jin
ARS, Cotton Struct & Qual Res Unit, USDA, New Orleans, LA 70124, USA
ARS, Cotton Fiber Bioscience Res Unit, USDA, New Orleans, LA 70124, USA
With cotton fiber growth or maturation, cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis of the fiber cellulose components. As one alternative, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies, including multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensity or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CIIR, and the integrated intensity of the 895 cm⁻¹ band exhibited strong and linear relationships with cellulose content. The results demonstrate the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithmic analysis, for assessing cotton fiber cellulose content, maturity, and crystallinity in a manner that is rapid, routine, and non-destructive.
ISBN:
(Print) 9783037855911
To obtain the parametric equation of an arbitrary polygon, a basic extension factor is applied to stretch a circle, generating a polygon that meets the required conditions. This algorithm overcomes the drawbacks of generating a polygon from a circle under the control of a single parameter, as well as the complexity of the underlying mathematics. First, a detailed introduction is given to the basic extension factor and the selection of the relevant coefficients, and then the errors of the algorithm are analyzed. Experimental results show that the mathematical background of the method is relatively simple, the parameters are easy to control and set, and arbitrary polygons can be generated with high precision. The parametric equation of an arbitrary polygon and this method can be used in related fields such as geometric modeling, CAD/CAM, and computer graphics.
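The abstract does not spell out the paper's extension factor, so the sketch below is only a rough illustration of the general idea, not the paper's formulation: one plausible radial scaling r(θ) stretches the unit circle into a regular n-gon, with vertices on the circle and edge midpoints at the apothem. Function names and the scaling itself are assumptions.

```python
import math

def polygon_point(theta, n):
    """Radially stretch the unit-circle point at angle theta onto a
    regular n-gon inscribed in that circle (a hypothetical 'extension
    factor' r(theta); not the paper's basic extension factor)."""
    half = math.pi / n
    # angle measured from the midpoint of the nearest polygon edge
    local = (theta % (2 * half)) - half
    r = math.cos(half) / math.cos(local)   # radial scaling of the circle
    return (r * math.cos(theta), r * math.sin(theta))

# vertices of a regular pentagon: the scaling leaves them on the circle
verts = [polygon_point(k * 2 * math.pi / 5, 5) for k in range(5)]
```

Sampling θ densely instead of only at the vertices traces the whole polygon boundary as a single parametric curve, which is the property the paper exploits.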
Returning adult students typically have expectations of gaining both near-term applicable skill sets and long-term foundational concepts from their courses. Most algorithm analysis classes are designed for traditional students and do not include teaching practical skill sets among their major goals. This paper describes an attempt to meet the expectations of both applicable skills and foundational concepts in an algorithm analysis class. The class emphasizes implementation and experimentation while covering concepts that support students' future self-learning. We also present our approach to teaching this highly demanding course to students with busy life schedules.
Accurately detecting low concentrations of ethyl acetate (EA) holds promise for the early screening of rectal and gastric cancer. The primary challenges lie in achieving a high response at parts-per-billion concentrations and ensuring high selectivity. This study focuses on designing Fe-Ce-O bimetallic oxides with doping and heterogeneous interfaces, which exhibit outstanding redox properties and a highly enhanced ability to adsorb and activate both O2 and EA molecules. Benefiting from the vigorous reaction between EA and the adsorbed oxygen species, the sensor achieves an ultrahigh ethyl acetate sensing response of more than 500,000 at 200 ppm, along with an ultrafast recovery rate (<5 s). In experiments, the response reaches 4.8 even at an extremely low concentration of 10 ppb. Special attention is given to the interfacial chemical reactions, probed by in situ DRIFTS during the sensing process. We propose for the first time that the intermediate byproducts produced (acetaldehyde, ethyl alcohol, acetic acid, and formic acid) co-respond on this sensor, contributing to its ultrahigh sensing response. Furthermore, both EA and the byproducts are effectively classified using linear discriminant analysis with 95% accuracy. This work is expected to elucidate the interfacial sensing mechanisms, particularly the contributions of derived byproducts to the sensor's response, and to propose a novel approach for designing high-performance sensors.
The matrix adaptation evolution strategy is a simplified covariance matrix adaptation evolution strategy with reduced computational cost. Using it as a search engine, several algorithms have recently been proposed for constrained optimization and have shown excellent performance. However, these algorithms require the simultaneous application of multiple techniques to handle constraints, and also require gradient information. This makes them inappropriate for handling non-differentiable functions. This paper proposes a new matrix adaptation evolution strategy for constrained optimization using helper and equivalent objectives. The method optimizes two objectives, but with no need for special handling of infeasible solutions and without gradient information. A new mechanism is designed to adaptively adjust the weights of the two objectives according to the convergence rate. The efficacy of the proposed algorithm is evaluated in computational experiments on the IEEE CEC 2017 Constrained Optimization Competition benchmarks. Experimental results demonstrate that it outperforms current state-of-the-art evolutionary algorithms. Furthermore, this paper provides sufficient conditions for the convergence of helper and equivalent objective evolutionary algorithms and proves that using helper objectives can reduce the likelihood of premature convergence.
This paper provides a method of producing a minimum cost spanning tree (MCST) using set operations. It studies the data structure for implementing set operations and the algorithm to be applied to this structure, and proves the correctness and the complexity of the algorithm. This algorithm uses the FDG (formula to divide elements into groups) to sort (the FDG sorts a sequence of n elements in expected time O(n)) and uses the method of path compression for the find and union operations. Therefore, it produces an MCST of an undirected network having n vertices and e edges in expected time O(eG(n)).
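The construction described above, sorting the edges and then growing the tree with path-compressed set operations, follows the shape of Kruskal's algorithm. The sketch below is a generic illustration, not the paper's method: it uses an ordinary comparison sort in place of the FDG expected-linear sort, together with union by rank and path compression.

```python
def find(parent, x):
    # path compression: point each visited node at its grandparent
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, edges):
    """edges: list of (cost, u, v); returns (total_cost, tree_edges)."""
    parent = list(range(n))
    rank = [0] * n
    total, tree = 0, []
    for cost, u, v in sorted(edges):       # comparison sort, not the FDG
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            continue                        # edge would create a cycle
        if rank[ru] < rank[rv]:
            ru, rv = rv, ru
        parent[rv] = ru                     # union by rank
        rank[ru] += rank[ru] == rank[rv]
        total += cost
        tree.append((u, v))
    return total, tree

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
cost, tree = kruskal(4, edges)
# for 4 vertices the tree has 3 edges; here their costs sum to 6
```

With an expected-linear sort such as the FDG, the sort no longer dominates, leaving the near-linear union-find cost that the abstract's O(eG(n)) bound expresses.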
The well-known Towers of Hanoi puzzle consists of a set of n disks of unequal size threaded onto 3 needles such that no disk rests on a smaller one. A legal move consists of removing a top disk from one needle and transferring it to another without violating the above rule. All recursive and iterative algorithms for the problem assume that the puzzle is initially in the stack configuration and perform the unique minimal sequence of moves to reach the final state. However, a person attempting to solve the puzzle may execute a different sequence of legal moves and leave the puzzle unfinished with an arbitrary arrangement of the disks. If such an arrangement is one of the configurations of the minimal sequence of moves, someone else can finish the puzzle by applying the minimal algorithm. Walsh (1982) has given a rule for identifying such cases. If the arrangement is not an intermediate configuration of the minimal sequence, this rule can recognize the error but cannot correct it.
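The unique minimal sequence of moves referred to above is generated by the standard recursive algorithm. This is a textbook sketch: it assumes the initial stack configuration and does not perform the intermediate-configuration recognition that the abstract and Walsh (1982) address.

```python
def hanoi(n, src, aux, dst, moves):
    """Append the minimal move sequence transferring n disks src -> dst."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # park the n-1 smaller disks on aux
    moves.append((src, dst))             # move the largest disk directly
    hanoi(n - 1, aux, src, dst, moves)   # restack the smaller disks on top

moves = []
hanoi(3, 'A', 'B', 'C', moves)
# the minimal sequence for n disks has exactly 2**n - 1 moves
```

Every prefix of this sequence yields one of the intermediate configurations from which Walsh's rule would let a second solver finish with the minimal algorithm.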