Piecewise polynomial approximation (PPA) of nonlinear functions plays an important role in high-precision computing. In this article, we propose QPA, an integration of error-flattened, quantization-aware PPA methods that generates optimized coefficients for efficient hardware implementations at any polynomial order. QPA incorporates four key features to minimize the fitting error and the hardware cost: using the Remez algorithm to compute the minimax fitting polynomial, combining the fitting and quantization operations to obtain an error-flattened characteristic, assigning a specific coefficient bit width to each multiplier to reduce the hardware cost, and fine-tuning the truncated coefficients to further reduce the fitting error. Experimental results show that our methods consistently achieve the lowest fitting error compared with state-of-the-art error-flattened piecewise approximation methods. We synthesized the proposed designs in 28-nm TSMC CMOS technology. The results show that the proposed designs achieve up to 37.0% area reduction and 50.5% power reduction compared to the state-of-the-art error-flattened piecewise linear (PWL) method, and up to 27.0% area reduction, 21.4% delay reduction, and 20.8% power reduction compared to the state-of-the-art error-flattened piecewise quadratic (PWQ) method.
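The fit-then-quantize loop at the heart of such a coefficient generator can be sketched as follows. This is a minimal illustration only, not the paper's actual QPA pipeline: it uses a least-squares fit as a stand-in for the Remez minimax fit, a uniform segmentation, and plain rounding in place of the coefficient fine-tuning step; all function and parameter names here are our own.

```python
import numpy as np

def fit_segment(f, lo, hi, order=1, grid=256):
    # Least-squares polynomial fit on a dense grid -- a stand-in for the
    # true minimax (Remez) fit described in the abstract.
    x = np.linspace(lo, hi, grid)
    return np.polyfit(x, f(x), order)

def quantize(coeffs, frac_bits):
    # Round each coefficient to a fixed-point grid with `frac_bits`
    # fractional bits, mimicking hardware coefficient truncation.
    scale = 2.0 ** frac_bits
    return np.round(np.asarray(coeffs) * scale) / scale

def pwp_error(f, segments, order, frac_bits, grid=256):
    # Max absolute error of the quantized piecewise fit over all segments.
    worst = 0.0
    for lo, hi in segments:
        c = quantize(fit_segment(f, lo, hi, order), frac_bits)
        x = np.linspace(lo, hi, grid)
        worst = max(worst, np.max(np.abs(np.polyval(c, x) - f(x))))
    return worst

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
segs = [(float(i), i + 1.0) for i in range(8)]   # uniform segmentation of [0, 8]
err_float = pwp_error(sigmoid, segs, order=1, frac_bits=30)  # near-unquantized
err_q4 = pwp_error(sigmoid, segs, order=1, frac_bits=4)      # crude 4-bit fractions
```

Comparing `err_float` and `err_q4` makes the abstract's point concrete: naive coefficient truncation can dominate the fitting error, which is exactly what coefficient-aware bit-width assignment and fine-tuning aim to avoid.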
ISBN:
(print) 9781713845065
We investigate the problem of hierarchically clustering data streams containing metric data in R^d. We introduce a desirable invariance property for such algorithms, describe a general family of hyperplane-based methods enjoying this property, and analyze two scalable instances of this general family against recently popularized similarity/dissimilarity-based metrics for hierarchical clustering. We prove a number of new results related to the approximation ratios of these algorithms, improving in various ways over the literature on this subject. Finally, since our algorithms are principled but also very practical, we carry out an experimental comparison on both synthetic and real-world datasets showing competitive results against known baselines.
Curve approximation is a challenging issue to precisely depict exquisite shapes of natural phenomena, in which the piecewise Bezier curve is one of the most widely utilized tools due to its beneficial properties. It i...
详细信息
Curve approximation is a challenging issue to precisely depict exquisite shapes of natural phenomena, in which the piecewise Bezier curve is one of the most widely utilized tools due to its beneficial properties. It is essential to determine the quantity and location of control points through the process of generating the mathematical representation of desired objects. This paper presents a new algorithm called adaptive extension fitting scheme (AEFS) to determine a piecewise Bezier curve that best fits a given sequence of data points as well as locate the coordinates of the connecting points between the pieces adaptively. Taking full advantage of the scalability of the Bezier curve segment, AEFS is effective in sequential knot searching within an impressively small computational consumption. The capability of the proposed stepwise extension strategy is deduced from rigorous theoretical proof, resulting in proper connecting points together with well-fitted Bezier curves. The proposed algorithm is evaluated by some popular benchmarks for curve fitting, and compared with several state-of-the-art approaches. Experimental results indicate that AEFS outperforms other models involved in terms of execution time, fitting accuracy, number of segments, and the authenticity of shape contours.
The sparse estimation methods that utilize the l(p)-norm, with p being between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These l(p)-norm-ba...
详细信息
The sparse estimation methods that utilize the l(p)-norm, with p being between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These l(p)-norm-based regularizations make the optimization function nonconvex, and algorithms that implement l(p)-norm minimization utilize approximations to the original l(p)-norm function. In this work, three such typical methods for implementing the l(p)-norm were considered, namely, iteratively reweighted l(1)-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iteratively thresholding method (ITM). These methods were deployed for performing diffuse optical tomographic image reconstruction, and a systematic comparison with the help of three numerical and gelatin phantom cases was executed. The results indicate that these three methods in the implementation of l(p)-minimization yields similar results, with IRL1 fairing marginally in cases considered here in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images. (C) 2014 Optical Society of America
Recently, data-driven and learning-based algorithms for low rank matrix approximation were shown to outperform classical data-oblivious algorithms by wide margins in terms of accuracy. Those algorithms are based on th...
详细信息
ISBN:
(纸本)9781713845393
Recently, data-driven and learning-based algorithms for low rank matrix approximation were shown to outperform classical data-oblivious algorithms by wide margins in terms of accuracy. Those algorithms are based on the optimization of sparse sketching matrices, which lead to large savings in time and memory during testing. However, they require long training times on a large amount of existing data, and rely on access to specialized hardware and software. In this work, we develop new data-driven low rank approximation algorithms with better computational efficiency in the training phase, alleviating these drawbacks. Furthermore, our methods are interpretable: while previous algorithms choose the sketching matrix either at random or by black-box learning, we show that it can be set (or initialized) to clearly interpretable values extracted from the dataset. Our experiments show that our algorithms, either by themselves or in combination with previous methods, achieve significant empirical advantages over previous work, improving training times by up to an order of magnitude toward achieving the same target accuracy.
Due to the probability characteristics of quantum mechanism, the combination of quantum mechanism and intelligent algorithm has received wide attention. Quantum dynamics theory uses the Schr?dinger equation as a quant...
详细信息
Due to the probability characteristics of quantum mechanism, the combination of quantum mechanism and intelligent algorithm has received wide attention. Quantum dynamics theory uses the Schr?dinger equation as a quantum dynamics equation. Through three approximation of the objective function, quantum dynamics framework(QDF) is obtained which describes basic iterative operations of optimization algorithms. Based on QDF, this paper proposes a potential barrier estimation(PBE) method which originates from quantum mechanism. With the proposed method, the particle can accept inferior solutions during the sampling process according to a probability which is subject to the quantum tunneling effect, to improve the global search capacity of optimization *** effectiveness of the proposed method in the ability of escaping local minima was thoroughly investigated through double well function(DWF), and experiments on two benchmark functions sets show that this method significantly improves the optimization performance of high dimensional complex functions. The PBE method is quantized and easily transplanted to other algorithms to achieve high performance in the future.
Distributed Arithmetic Coding (DAC) is a practical realization of Slepian-Wolf coding, one of whose properties is Coset Cardinality Spectrum (CCS). The initial CCS is especially important because it has many applicati...
详细信息
Distributed Arithmetic Coding (DAC) is a practical realization of Slepian-Wolf coding, one of whose properties is Coset Cardinality Spectrum (CCS). The initial CCS is especially important because it has many applications. Up to now, the initial CCS is calculable only for some discrete rates, while in general cases, the time-consuming numerical algorithm is needed. Though a polynomial approximation of the initial CCS has been proposed recently, its complexity becomes very high as code rate decreases. Hence, this letter aims at finding simpler approximations for the initial CCS at low rates by proposing two methods: interpolation approximation and bell-shaped approximation. The effectiveness of both methods is illustrated by simulation results.
Traditional clustering algorithms often focus on the most fine-grained information and achieve clustering by calculating the distance between each pair of data points or implementing other calculations based on points...
详细信息
Traditional clustering algorithms often focus on the most fine-grained information and achieve clustering by calculating the distance between each pair of data points or implementing other calculations based on points. This way is not inconsistent with the cognitive mechanism of "global precedence" in the human brain, resulting in those methodsbad performance in efficiency, generalization ability, and robustness. To address this problem, we propose a new clustering algorithm called granular-ball clustering via granular-ball computing. First, clustering algorithm based on granular-ball (GBCT) generates a smaller number of granular-balls to represent the original data and forms clusters according to the relationship between granular-balls, instead of the traditional point relationship. At the same time, its coarse-grained characteristics are not susceptible to noise, and the algorithm is efficient and robust;besides, as granular-balls can fit various complex data, GBCT performs much better in nonspherical datasets than other traditional clustering methods. The completely new coarse granularity representation method of GBCT and cluster formation mode can also be used to improve other traditional methods.
This article presents a general approximation-theoretic framework to analyze measure transport algorithms for probabilistic modeling. A primary motivating application for such algorithms is sampling-a central task in ...
详细信息
Subset selection for the rank k approximation of an n × d matrix A offers improvements in the interpretability of matrices, as well as a variety of computational savings. This problem is well-understood when the ...
详细信息
暂无评论