ISBN:
(Print) 9781538607336
With the integration of face recognition technology into important identity applications, it is imperative that the effects of facial aging on face recognition performance are thoroughly understood. As face recognition systems evolve and improve, they should be periodically re-evaluated on large-scale longitudinal face datasets. In our study, we evaluate the performance of two state-of-the-art commercial off-the-shelf (COTS) face recognition systems on two large-scale longitudinal datasets of mugshots of repeat offenders. The larger of the two datasets has 147,784 images of 18,007 subjects, with an average of 8 images per subject over an average time span of 8.5 years. We fit multi-level statistical models to genuine comparison scores (similarity between images of the same face) from the two COTS face matchers. This allows us to analyze the degradation in recognition performance due to the time elapsed between a probe (query) image and its enrollment (gallery) image. We account for face image quality to obtain a better estimate of trends due to aging, and analyze whether longitudinal trends in genuine scores differ by subject gender and race. Based on the results of our statistical models, we infer that the state-of-the-art COTS matchers can verify 99% of the subjects at a false accept rate (FAR) of 0.01% for up to 10.5 and 8.5 years of elapsed time, respectively. Beyond a time lapse of 8.5 years, there is a significant loss in face recognition accuracy. This study extends and confirms the findings of earlier longitudinal studies on face recognition.
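The abstract's core analysis, regressing genuine scores on elapsed time while accounting for subject-level effects, can be illustrated with a simplified stand-in: a within-subject (demeaned) regression, which removes per-subject intercepts much as the random-intercept term of a multi-level model does. All data below are simulated; the coefficients and noise levels are illustrative assumptions, not values from the paper.

```python
# Simulated longitudinal genuine scores: a small aging-related decline
# plus a per-subject random intercept and residual noise.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, imgs = 50, 8
subject = np.repeat(np.arange(n_subjects), imgs)
elapsed = rng.uniform(0, 10, subject.size)           # years between image pairs
true_slope = -0.01                                   # assumed decline per year
score = (0.9 + true_slope * elapsed
         + rng.normal(0, 0.02, n_subjects)[subject]  # subject random intercept
         + rng.normal(0, 0.01, subject.size))        # residual noise

def within_subject_slope(score, elapsed, subject):
    """Demean score and time per subject, then fit one shared slope."""
    s = score.astype(float).copy()
    t = elapsed.astype(float).copy()
    for g in np.unique(subject):
        m = subject == g
        s[m] -= s[m].mean()
        t[m] -= t[m].mean()
    return float(np.polyfit(t, s, 1)[0])

slope = within_subject_slope(score, elapsed, subject)
```

The recovered slope is close to the simulated decline, showing how subject-level variation can be separated from the aging trend.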
ISBN:
(Print) 9783319700939; 9783319700922
3D reconstruction can facilitate the diagnosis of liver disease by making the target easier to identify and revealing volume and shape much better than 2D imaging. In this paper, in order to realize 3D reconstruction of the liver parenchyma, a series of pretreatments is carried out, including windowing conversion, filtering, and liver parenchyma extraction. Furthermore, three modeling methods for reconstructing the liver parenchyma were investigated: surface rendering, volume rendering, and point rendering. A marching cubes (MC) algorithm based on 3D region growing is proposed to overcome the large number of voids and the long modeling time that the traditional MC algorithm exhibits on the contours. Simulation results of the three modeling methods show different advantages and disadvantages. Surface rendering models the liver surface intuitively, but it cannot reflect information inside the liver. Volume rendering can reflect the internal information of the liver, but it requires higher computer performance. Point rendering is fast compared with surface and volume rendering, whereas the modeling effect is rough. Therefore, we conclude that different modeling methods should be selected for different requirements.
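The 3D region-growing step that the paper combines with marching cubes can be sketched as follows: starting from a seed voxel, grow a 6-connected region of voxels whose intensity stays within a tolerance of the seed. This is a pure-NumPy toy version on a synthetic volume; the paper's implementation details are not given in the abstract.

```python
# 3-D region growing by breadth-first search over 6-connected voxels.
from collections import deque
import numpy as np

def region_grow_3d(vol, seed, tol=10.0):
    """Return a boolean mask of voxels 6-connected to `seed` within `tol`."""
    mask = np.zeros(vol.shape, dtype=bool)
    ref = vol[seed]
    q = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)):
                if not mask[n] and abs(vol[n] - ref) <= tol:
                    mask[n] = True
                    q.append(n)
    return mask

# Synthetic volume: a bright 4x4x4 block (standing in for parenchyma)
# inside a dark background; growing from inside the block selects it.
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 100.0
mask = region_grow_3d(vol, seed=(3, 3, 3), tol=10.0)
```

Running marching cubes only over the grown region (rather than the whole volume) is what limits the surface extraction to the connected structure of interest.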
Unmanned aerial vehicle (UAV)-based environmental studies have gained ground in recent years due to their advantages of minimal cost, flexibility, and very high spatial resolution. Researchers can acquire imagery according to their own schedule and convenience, with the option of switching among sensors working in visible, infrared, and microwave wavelengths. Recent developments in UAVs and in the associated image-processing techniques extend the fields of UAV application. The inherent geometric deformation of UAV images has inevitably led to burgeoning interest in geographical registration techniques for UAV image preprocessing. However, atmospheric correction has generally been neglected owing to the low altitudes of UAV platforms. The path radiance of the low-altitude atmosphere biases the retrieved reflectance of target objects. Thus, a valid atmospheric correction is essential in cases where vegetation indices (VIs) are adopted for vegetation monitoring. The off-the-shelf atmospheric correction algorithms adopted in satellite-based remote sensing are typically ill-suited to UAV images due to the distinctly different altitudes and radiative transfer modes. This article identified the effect of atmospheric attenuation on spectral data collected by UAV sensors at different altitudes and developed a physically based atmospheric correction algorithm for UAV images. A field-measured reflectance spectrum was essential in the modelling. A sunny, dry day and flat terrain were the two prerequisites for general application of the developed algorithm. A case study was subsequently carried out to verify the utility of the developed algorithm, and the results showed that VIs based on UAV images at different altitudes had a similar ability in vegetation assessment to ground-based recordings. However, the assessment accuracy could be clearly improved by using the developed atmospheric correction algorithm.
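The abstract states that field-measured reflectance spectra are essential to the correction. One common way to realize that idea (not necessarily the paper's exact algorithm) is the empirical line method: fit a per-band linear map from at-sensor digital numbers (DN) to surface reflectance using ground targets of known reflectance. The DN and reflectance values below are illustrative, not from the paper.

```python
# Empirical line calibration: reflectance ≈ gain * DN + offset,
# with gain/offset estimated from field calibration targets.
import numpy as np

def empirical_line_fit(dn_targets, reflectance_targets):
    """Least-squares gain/offset mapping at-sensor DN to reflectance."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, deg=1)
    return gain, offset

# Hypothetical dark, grey, and bright panels measured in the field.
dn = np.array([20.0, 60.0, 180.0])     # at-sensor digital numbers
rho = np.array([0.05, 0.20, 0.60])     # field-measured reflectance

gain, offset = empirical_line_fit(dn, rho)
corrected = gain * 100.0 + offset      # correct an arbitrary pixel DN
```

The offset absorbs the additive path-radiance term the abstract describes, while the gain compensates multiplicative attenuation.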
ISBN:
(Print) 9781538620083
Auditing of certificate and bill images is pervasive in ERP systems. However, the scanned or camera-captured images sent to an ERP system are not always of good quality. In order to automate the auditing of certificates and bills, and to alleviate the low recognition rate caused by low-quality images in automatic analysis and processing systems for certificates and bills, this paper proposes a method for detecting and filtering out low-quality images, leaving only high-quality images, so as to improve the recognition rate of certificate and bill auditing. Unlike other image quality assessment algorithms, which deal only with blur or noise, the proposed method comprehensively and practically considers a variety of key factors (clarity, color bias, noise, abnormal brightness areas, etc.) that affect image quality in the process of certificate and bill assessment. The method has been applied to detect image quality in an automatic certificate and bill verification system, and has achieved good unbiasedness and high sensitivity in real-world ERP applications.
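A minimal sketch of this kind of multi-factor screening is given below. The metric choices (Laplacian variance for clarity, channel-mean deviation for color bias, a mean-brightness window) and all thresholds are illustrative assumptions, not the paper's actual criteria.

```python
# Multi-factor image quality screening: an image must pass clarity,
# color-bias, and brightness checks to be accepted for recognition.
import numpy as np

def laplacian_variance(gray):
    """Focus/clarity measure: variance of a 4-neighbour Laplacian."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def color_bias(rgb):
    """Max deviation of per-channel means from the grey mean (0 = neutral)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return float(np.abs(means - means.mean()).max())

def is_acceptable(rgb, blur_thresh=10.0, bias_thresh=30.0,
                  bright_lo=30.0, bright_hi=225.0):
    gray = rgb.mean(axis=2)
    return (laplacian_variance(gray) >= blur_thresh
            and color_bias(rgb) <= bias_thresh
            and bright_lo <= gray.mean() <= bright_hi)

# A high-frequency neutral test image passes; a flat mid-grey image
# fails the clarity check (zero Laplacian variance).
rng = np.random.default_rng(1)
noisy = rng.uniform(0, 255, (64, 64, 3))
flat = np.full((64, 64, 3), 128.0)
```

Combining independent per-factor checks like this keeps each rejection explainable, which matters when flagged documents are sent back to users for re-scanning.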
ISBN:
(Print) 9781509063451
We present efficient Schur parametrization algorithms for a subclass of near-stationary second-order stochastic processes which we call p-stationary processes. This approach allows for uniform complexity reduction of the general linear Schur algorithm and results in a hierarchical class of algorithms that are suitable for efficient implementation and form a good starting point for nonlinear generalizations.
The article deals with the problem of segmentation of digital images, which is one of the main tasks in the field of digital image processing (IP) and computer vision. To solve this problem, an algorithm is proposed based on concepts from the theory of fuzzy sets. The main idea of the proposed algorithm is the formation of subsets of interconnected pixels based on the fuzzy c-means method. A distinctive feature of the proposed algorithm is the definition of a set of features that delineate areas with similar characteristics in the space of the characteristic features of the analyzed image. The proposed segmentation algorithm (SA) consists of two stages: 1) the formation of characteristic features for all channels of the base color; 2) clustering of image elements. The practical significance of the obtained results lies in the fact that the developed algorithms can be used in various applied problems where classification of objects represented as images is required. To test the efficiency of the developed algorithm, experimental studies were carried out on a number of applied problems related to color image segmentation, in particular license plate recognition.
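The clustering stage can be sketched with a bare-bones fuzzy c-means implementation: each pixel (here a 1-D intensity feature for simplicity) receives soft memberships to c cluster centres, which are alternately re-estimated. This is a generic illustration, not the article's exact formulation.

```python
# Minimal fuzzy c-means: alternate centre and membership updates.
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """x: (n, d) feature vectors. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)             # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)            # standard FCM membership update
    return centers, u

# Two well-separated intensity groups, e.g. dark vs bright pixels.
x = np.array([[10.0], [12.0], [11.0], [200.0], [205.0], [198.0]])
centers, u = fuzzy_c_means(x, c=2)
labels = u.argmax(axis=0)                    # hard labels from soft memberships
```

Taking the argmax over memberships yields the hard segmentation; the soft memberships themselves are what distinguishes the fuzzy approach from k-means at region boundaries.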
ISBN:
(Print) 9789526865300
Uterine cervical cancer is the second most common cancer in women worldwide. The accuracy of colposcopy is highly dependent on the physician's individual skill. In expert hands, colposcopy has been reported to have high sensitivity (96%) and low specificity (48%) when differentiating abnormal tissues. This has led to significant interest in new diagnostic systems and new automatic methods for colposcopic image analysis. The presented paper is devoted to developing a method based on analysis of fluorescence images obtained at different excitation wavelengths. The sets of images were obtained in the clinic with the multispectral colposcope LuxCol. The images for one patient include images obtained under white-light illumination and under polarized white light, and fluorescence images obtained by excitation at wavelengths of 360 nm, 390 nm, 430 nm, and 390 nm combined with a 635 nm laser. Our approach involves image acquisition, image processing, feature extraction, selection of the most informative features and the most informative image types, classification, and pathology map creation. The result of the proposed method is a pathology map: an image of the cervix partitioned into areas with a definite diagnosis such as norm, CNI (chronic nonspecific inflammation), or CIN (cervical intraepithelial neoplasia). On the CNI/CIN border, the obtained sensitivity is 0.85 and the specificity is 0.78. The proposed algorithms make it possible to obtain a correct differential pathology map with probability 0.8. The obtained results and the characteristics of the classification task show the feasibility of practical application of pathology maps based on fluorescence images.
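The per-pixel classification behind such a pathology map can be sketched as follows: each pixel becomes a feature vector stacked from the co-registered multispectral and fluorescence images, and is assigned the class of the nearest class centroid. The nearest-centroid rule, the two-feature vectors, and the centroid values are all illustrative assumptions; the abstract does not specify the classifier.

```python
# Toy per-pixel pathology-map classification by nearest centroid.
import numpy as np

def nearest_centroid_map(features, centroids, names):
    """features: (H, W, d) pixel features; centroids: (k, d) class centres."""
    d = np.linalg.norm(features[..., None, :] - centroids, axis=-1)
    return np.take(names, d.argmin(axis=-1))

names = np.array(["norm", "CNI", "CIN"])                    # classes from the abstract
centroids = np.array([[0.8, 0.1], [0.5, 0.5], [0.1, 0.9]])  # hypothetical centres

# Tiny 2x2 "image" with two fluorescence-derived features per pixel.
features = np.array([[[0.82, 0.12], [0.48, 0.55]],
                     [[0.05, 0.88], [0.79, 0.15]]])
pathology_map = nearest_centroid_map(features, centroids, names)
```

The output is an image-shaped array of class labels, i.e. exactly the "pathology map" structure the abstract describes.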
In this paper, we address the problem of parameter-space dimension reduction in the task of interpolating multidimensional signals. We develop adaptive parameterized interpolation algorithms for multidimensional signals, and perform a dimension reduction of the parameter space to reduce the complexity of optimizing such algorithms. To reduce the dimension, the dependencies of samples within signal sections and between signal sections are taken into account in different ways: dependencies between signal sections are handled by an approximation algorithm for the sections, while sample dependencies within sections are handled by an adaptive parameterized interpolation algorithm. As a result, we solve the optimization problem of the adaptive interpolator in a lower-dimensional parameter space for each signal section separately. To study the effectiveness of the adaptive interpolators, we perform computational experiments on real-world multidimensional signals. Experimental results show that the proposed interpolator improves the efficiency of the compression method by up to 10% compared with the prototype algorithm.
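The per-section optimization idea can be illustrated with a toy interpolator: for each 2-D section, search a single blending parameter that minimizes the interpolation error on that section alone, rather than optimizing all parameters jointly across sections. The two-neighbour predictor and the 1-D parameter grid are illustrative stand-ins; the paper's actual interpolator is more elaborate.

```python
# Per-section parameter search for a simple two-neighbour interpolator.
import numpy as np

def interpolate(section, w):
    """Predict interior samples as w*left-neighbour + (1-w)*top-neighbour."""
    return w * section[1:, :-1] + (1.0 - w) * section[:-1, 1:]

def best_weight(section, candidates=np.linspace(0, 1, 21)):
    """1-D parameter search per section (the reduced parameter space)."""
    target = section[1:, 1:]
    errors = [np.abs(target - interpolate(section, w)).mean()
              for w in candidates]
    return candidates[int(np.argmin(errors))]

# A section with a pure column gradient is predicted exactly by its top
# neighbour (w = 0); a pure row gradient by its left neighbour (w = 1).
cols = np.tile(np.arange(8.0), (8, 1))   # value = column index
rows = cols.T                            # value = row index
```

Because each section is optimized independently over a single scalar, the search cost grows linearly with the number of sections instead of exponentially with a joint parameter space.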
ISBN:
(Digital) 9781510609921
ISBN:
(Print) 9781510609914; 9781510609921
A back-propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store mappings between large numbers of input and output patterns without complex mathematical equations to describe the mapping relationship, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network from training samples via back propagation, minimizing the network's sum of squared errors. Since the boundary of the heart in computed tomography (CT) images is usually discontinuous, and the volume and boundary of heart images vary greatly, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Moreover, there are large differences between diastolic and systolic images, and conventional methods cannot accurately classify the two cases. In this paper, we introduce a BP network to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain training samples, and the BP network was trained on these samples. To obtain an appropriate BP network for heart image segmentation, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. After extensive training, the BP network became stable and the weight coefficients of the layers could be determined, capturing the relationship between the CT images and the heart boundary.
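The training loop described above, a forward pass, an error between actual and desired output, and backward propagation of that error into weight updates, can be shown in miniature. XOR stands in for the paper's boundary-pixel samples; the architecture and learning rate are illustrative choices, not the paper's.

```python
# A bare-bones back-propagation network: one hidden layer, sigmoid
# activations, full-batch gradient descent on sum-of-squares error.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # desired output (XOR)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 1.0

initial_loss = float(np.mean((forward(X, W1, b1, W2, b2)[1] - y) ** 2))

for _ in range(5000):
    h, out = forward(X, W1, b1, W2, b2)
    err = out - y                         # error propagated backward
    d_out = err * out * (1 - out)         # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = float(np.mean((forward(X, W1, b1, W2, b2)[1] - y) ** 2))
```

Repeating this cycle until the weights stabilize is exactly the "large amount of training" the abstract refers to, after which the fixed weights encode the learned input-output relationship.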
ISBN:
(Print) 9781510612402; 9781510612396
Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it in their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools that use machine learning to automate data processing for ALS users. These include new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run either in automated pipelines that operate on data as they are collected or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.