ISBN (print): 9781728198354
Sparse keypoint-based methods allow two images to be matched efficiently. However, even though they are sparse, not all generated keypoints are necessary. Superfluous keypoints needlessly increase the computational cost of the matching step and can even add uncertainty when they are not discriminative enough, leading to imprecise, or even wrong, alignment. In this paper, we address the important case where the alignment deals with the same scene or the same type of object. This enables a preliminary learning of keypoints that are optimal in terms of efficiency and robustness. Our fully unsupervised selection method applies a statistical a contrario test to a small set of training images to build, without any supervision, a dictionary of the points most relevant for alignment. We show the usefulness of the proposed method on two applications: the stabilization of video surveillance sequences and the fast alignment of industrial objects containing repeated patterns. Our experiments demonstrate a 20x speed-up and a significant accuracy gain.
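The a contrario selection described in this abstract can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the binomial background model, the accidental-match probability `p0`, and the input `match_counts` (the number of training images in which each candidate keypoint found a match) are all assumptions introduced here.

```python
import math

def a_contrario_select(match_counts, n_images, p0=0.1, eps=1.0):
    """Keep keypoints whose number of matches across the training set
    is statistically significant under a binomial background model.

    match_counts : matches of each candidate keypoint over the
                   n_images training images (hypothetical input).
    p0           : assumed probability of an accidental match per image.
    eps          : bound on the expected number of false detections.
    """
    n_candidates = len(match_counts)

    def binomial_tail(k, n, p):
        # P[X >= k] for X ~ Binomial(n, p)
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    selected = []
    for idx, k in enumerate(match_counts):
        # Number of False Alarms: NFA = (number of tests) * P[X >= k].
        # A keypoint is "meaningful" (non-accidental) when NFA < eps.
        nfa = n_candidates * binomial_tail(k, n_images, p0)
        if nfa < eps:
            selected.append(idx)
    return selected

# Keypoints matched in 9 and 8 of 10 training images are kept;
# those matched 1 or 0 times are consistent with chance and dropped.
kept = a_contrario_select([9, 1, 8, 0], n_images=10)
```

The `eps` threshold bounds the expected number of accidental detections over the whole dictionary, which is what makes the selection parameter-light and unsupervised.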
Image segmentation is one of the key problems in image processing. Among the different models and approaches developed, some of the commonly used statistical methods are based on intensity homogeneity. In this pap...
In order to improve the anti-noise performance of traditional image binarization methods, this paper proposes a novel binarization method for low-quality images based on a threshold array system. The proposed method inv...
A new method is developed to adjust the forecast in the case of a small number of observations or in the absence of standard patterns that can be detected by classical machine learning and statistical methods. The effect...
Given the vast number of classifiers that have been (and continue to be) proposed, reliable methods for comparing them are becoming increasingly important. The desire for reliability is broken down into three main asp...
ISBN (print): 9798350305081
We present the system architecture for real-time processing of data that originates in large-format tiled imaging arrays used in wide area motion imagery ubiquitous surveillance. High performance and high throughput are achieved through approximate computing and fixed-point variable-precision (6-bit to 18-bit) arithmetic. The architecture implements a variety of processing algorithms in what we consider today as Third Wave AI and Machine Intelligence, ranging from convolutional neural networks (CNNs) to linear and non-linear morphological processing, probabilistic inference using exact and approximate Bayesian methods, and Deep Neural Network based classification. The processing pipeline is implemented entirely using event-based neuromorphic and stochastic computational primitives. An emulation of the system architecture demonstrated real-time processing of 160 x 120 raw pixel data running on a reconfigurable computing platform (5 Xilinx Kintex-7 FPGAs). The reconfigurable computing implementation was developed to emulate the computational structures of a 2.5D system chiplet design fabricated in 55nm GF CMOS technology. To optimize the energy efficiency of a mixed-level system, a general energy-aware methodology is applied throughout the design process at all levels, from algorithms and architecture all the way down to technology and devices, while keeping the operational requirements and specifications of the task in focus.
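The variable-precision fixed-point arithmetic mentioned above (6 to 18 bits) trades accuracy for energy. A minimal software sketch of such quantization, assuming a signed format with a hypothetical integer/fraction bit split (the chip's actual number formats are not specified in this abstract):

```python
def to_fixed_point(x, total_bits, frac_bits):
    """Quantize x to a signed fixed-point value with the given
    precision, saturating at the representable range. Illustrative
    only; emulates in software what the hardware does natively."""
    scale = 1 << frac_bits
    max_int = (1 << (total_bits - 1)) - 1   # largest representable code
    min_int = -(1 << (total_bits - 1))      # most negative code
    q = round(x * scale)
    q = max(min_int, min(max_int, q))       # saturate on overflow
    return q / scale                        # back to a real value

# Lower precision -> larger quantization error.
err6  = abs(to_fixed_point(0.7371, total_bits=6,  frac_bits=4)  - 0.7371)
err18 = abs(to_fixed_point(0.7371, total_bits=18, frac_bits=16) - 0.7371)
```

Running algorithms at the lowest bit width that still meets the task specification is one concrete form of the energy-aware methodology the abstract describes.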
ISBN (print): 9781510672178; 9781510672161
Process control of advanced semiconductor nodes is pushing the limits of metrology equipment requirements not only in terms of resolution and throughput but also in terms of the richness of the data to be extracted, which enables engineers to fine-tune process steps for increased yield. The move towards 3D structures requires extraction of critical dimension parameters from structures which can vary largely from layer to layer. For in-line process control, the necessary automation forces the development of layer- and equipment-specific dedicated image processing algorithms. Similarly, with the increase in stochastic defects in the EUV era, detecting defects at the nm scale requires identifying features captured at low resolution to meet the throughput requirements of HVM fabs, which can again lead to custom algorithm development. With the emergence of ML-based image processing methods, algorithm development for both cases can be accelerated. In this work, we provide the general framework under which images obtained from high-speed scanning probe microscopy-based systems can be used to train a network for either feature detection for parameter extraction or defect identification.
ISBN (print): 9781713871088
In this paper we propose a general framework to perform statistical online inference in a class of constant step size stochastic approximation (SA) problems, including the well-known stochastic gradient descent (SGD) and Q-learning. Regarding a constant step size SA procedure as a time-homogeneous Markov chain, we establish a functional central limit theorem (FCLT) for it under weaker conditions, and then construct confidence intervals for parameters via random scaling. To leverage the FCLT results in the Markov chain setting, an alternative condition that is more applicable for SA problems is established. We conduct experiments to perform inference with both random scaling and other traditional inference methods, and find that the former has more accurate and robust performance.
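A one-dimensional sketch of this setting: constant step-size SGD for estimating a mean, together with the averaged iterate and a random-scaling variance estimator of the kind used to build confidence intervals. The quadratic loss, the 1-D reduction, and the step size are simplifying assumptions; this is not the paper's general procedure.

```python
import random

def sgd_mean_with_random_scaling(data, eta=0.1, theta0=0.0):
    """Constant step-size SGD for the loss 0.5*(theta - x)^2, whose
    iterates form a time-homogeneous Markov chain. Returns the averaged
    iterate and the random-scaling statistic
    V_n = n^{-2} * sum_s ( sum_{i<=s} (theta_i - theta_bar) )^2,
    which normalizes the error without estimating the asymptotic
    variance directly (1-D illustrative sketch)."""
    thetas = []
    theta = theta0
    for x in data:
        theta -= eta * (theta - x)      # SGD step with constant eta
        thetas.append(theta)
    n = len(thetas)
    theta_bar = sum(thetas) / n         # averaged (Polyak) iterate
    partial = 0.0                       # running sum of centered iterates
    v = 0.0
    for t in thetas:
        partial += t - theta_bar
        v += partial * partial
    return theta_bar, v / (n * n)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(5000)]
theta_bar, v = sgd_mean_with_random_scaling(data)
```

Because `v` is built from the trajectory itself, the studentized ratio is asymptotically pivotal under the FCLT, which is what makes the resulting intervals usable online.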
The paper considers the problem of estimating a two-parameter generalization of the Rayleigh distribution and finding the distributions of the estimates. This problem arose from the needs of statistical sampling inspection of ...
ISBN (print): 9781728198354
Dynamic Digital Humans (DDHs) are 3D digital models animated using predefined motions. They are inevitably affected by noise/shift during the generation process and by compression distortion during the transmission process, which need to be perceptually evaluated. Usually, DDHs are displayed as 2D rendered animation videos, so it is natural to adapt video quality assessment (VQA) methods to DDH quality assessment (DDH-QA) tasks. However, VQA methods are highly dependent on viewpoints and less sensitive to geometry-based distortions. Therefore, in this paper, we propose a novel no-reference (NR) geometry-aware video quality assessment method for the DDH-QA challenge. Geometry characteristics are described by statistical parameters estimated from the DDHs' geometry attribute distributions. Spatial and temporal features are acquired from the rendered videos. Finally, all features are integrated and regressed into quality values. Experimental results show that the proposed method achieves state-of-the-art performance on the DDH-QA database.
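The "statistical parameters estimated from geometry attribute distributions" can be illustrated with simple moments over a per-vertex attribute such as curvature. The choice of moments here is a hypothetical stand-in for whichever parametric descriptors the paper actually fits; only the general idea (summarizing an attribute distribution by a few numbers usable as quality features) is taken from the abstract.

```python
def geometry_stat_features(values):
    """Summarize a geometry attribute distribution (e.g. per-vertex
    curvature of a DDH mesh) by its first four moments, yielding a
    fixed-length feature vector for a quality regressor. Illustrative
    stand-in for the paper's statistical descriptors."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    # Standardized 3rd and 4th moments; guard the degenerate case.
    skew = sum((v - mean) ** 3 for v in values) / (n * std**3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in values) / (n * std**4) if std else 0.0
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt}

feats = geometry_stat_features([1.0, 2.0, 3.0, 4.0, 5.0])
```

Geometry distortions (noise, compression) shift these distribution statistics even when a rendered 2D view hides them, which is why geometry-aware features complement the spatial and temporal video features.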