Several algorithms have been proposed for constrained face recognition applications. Among them, the eigenphases algorithm and some variations of it that use sub-block processing appear to be desirable alternatives because they achieve high face recognition rates under controlled conditions. However, their performance degrades when the face images under analysis present variations in illumination conditions as well as partial occlusions. To overcome these problems, this paper derives the optimal sub-block size that improves the performance of previously proposed eigenphases algorithms. Theoretical and computer evaluation results show that, using the optimal block size, the identification performance of the eigenphases algorithm improves significantly over the conventional one when the face image presents different illumination conditions and partial occlusions. The optimal sub-block size also achieves very low false acceptance and false rejection rates simultaneously when performing identity verification tasks, which is not possible with the conventional approach, and it improves the performance of other sub-block-based eigenphases methods when rank tests are performed. (C) 2012 Elsevier B.V. All rights reserved.
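To make the sub-block idea above concrete, here is a minimal sketch of sub-block phase-feature extraction followed by PCA, the two ingredients of an eigenphases pipeline. The block size, image size, and number of principal components are illustrative assumptions, not the optimal values derived in the paper.

```python
# Hypothetical sketch of sub-block eigenphases feature extraction.
# Block size, image size, and PCA dimensionality are illustrative only.
import numpy as np

def subblock_phase_features(img, block=8):
    """Split img into block x block tiles and concatenate the
    phase spectrum of each tile into one feature vector."""
    h, w = img.shape
    feats = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = img[r:r + block, c:c + block]
            phase = np.angle(np.fft.fft2(tile))  # phase-only spectrum
            feats.append(phase.ravel())
    return np.concatenate(feats)

def pca_basis(X, k):
    """Return the top-k principal directions of row-wise samples X."""
    Xc = X - X.mean(axis=0)
    # principal directions via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

# toy usage: 10 random "face images", 32x32 pixels
faces = np.random.rand(10, 32, 32)
X = np.stack([subblock_phase_features(f) for f in faces])
W = pca_basis(X, k=5)               # "eigenphases" of the training set
projections = (X - X.mean(0)) @ W.T  # features used for matching
```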
ISBN:
(Print) 9781467359290; 9781467359283
Algorithms for real-time imaging in medicine are continuously improving. For example, image reconstruction requires rapid processing of the large volume of acquired data. One way to speed up certain parts of the data processing is to use FPGAs. This technology can further enhance the integration of electronic systems to the level of mobile systems. The objective of this paper is to reveal and demonstrate the possibility of accelerating medical imaging algorithms using an FPGA. Conventional graphical outputs are implemented using a graphics control unit with a large video memory. For imaging, however, methods that generate images in real time without using video memory can also be used. A sequence of heart images was chosen for the demonstration. Shown in quick succession, these images display a beating heart, i.e., one cardiac cycle. The animation is implemented with different ROM-based images in the FPGA, without continuous redrawing of a video memory. The demonstration design also uses a mouse, whose movement can smoothly move the animated images. Other image parameters can be set in real time.
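As a software analogue of the ROM-based scheme described above, the sketch below indexes pre-stored frames instead of redrawing a video memory; the frame count, image size, and mouse-offset handling are assumptions for illustration (the actual design is an FPGA circuit, not Python).

```python
# Software analogue of ROM-based animation: frames are pre-stored
# (the "ROM") and the display simply indexes them, so no frame is
# ever redrawn into a video memory. All sizes are illustrative.
import numpy as np

FRAMES = 8                             # one cardiac cycle, 8 stored images
ROM = np.random.rand(FRAMES, 64, 64)   # stand-in for the heart image ROM

def displayed_pixel(frame_counter, x, y, dx=0, dy=0):
    """Return the pixel shown at (x, y) for the current frame counter.
    dx, dy model the mouse-controlled offset of the animated image."""
    f = frame_counter % FRAMES          # wrap around the heart cycle
    h, w = ROM.shape[1:]
    return ROM[f, (y - dy) % h, (x - dx) % w]

# sweep through one heart cycle with a small mouse offset
for t in range(FRAMES):
    _ = displayed_pixel(t, 10, 20, dx=3, dy=1)
```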
This paper describes how the development of systems with limited capacity and limited performance requires a specific approach in the implementation phase, especially if the application has demands that exceed the capabilities of the hardware used. The paper specifies the compression methods used for processing the video signal from the camera, which is scanned and detected in a consumption-meter state-reading application. According to its parameters, the Atmega1284p microprocessor used is not suitable for video signal processing applications, but the application can be realized using the proposed compression methods and a specific implementation of the algorithms. The design and verification of the compression methods is important for choosing the best method for detecting the video signal.
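The abstract does not name the compression methods, so the following is only a plausible example of the kind of low-footprint scheme an 8-bit MCU such as the Atmega1284p could run: delta coding of a scanline followed by run-length encoding. All details are assumptions.

```python
# Plausible low-footprint compression for an 8-bit MCU: delta-encode a
# scanline (neighboring pixels are similar), then run-length encode the
# deltas. This is an illustration, not the paper's method.
def delta_rle(samples):
    """Delta-encode a scanline, then run-length encode the deltas."""
    deltas = [samples[0]] + [
        (b - a) & 0xFF for a, b in zip(samples, samples[1:])
    ]
    out, i = [], 0
    while i < len(deltas):
        j = i
        # extend the run while the delta repeats (max run 255)
        while j + 1 < len(deltas) and deltas[j + 1] == deltas[i] and j - i < 254:
            j += 1
        out.append((j - i + 1, deltas[i]))   # (run length, delta value)
        i = j + 1
    return out

line = [120, 120, 120, 121, 121, 90, 90, 90, 90]
print(delta_rle(line))   # runs of zero deltas compress well
```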
ISBN:
(Print) 9781479914821
Projective non-negative matrix factorization (PNMF) projects a set of examples onto a subspace spanned by a non-negative basis whose transpose is regarded as the projection matrix. Since PNMF learns a natural parts-based representation, it has been successfully used in text mining and pattern recognition. However, it is non-trivial to analyze the convergence of the optimization algorithms for PNMF because its objective function is non-convex. In this paper, we propose a Box-constrained PNMF (BPNMF) method to overcome this deficiency of PNMF. In particular, BPNMF introduces an auxiliary variable, i.e., the coefficients of the examples, and incorporates the following two types of constraints: 1) each entry of the basis is non-negative and upper-bounded, i.e., box-constrained, and 2) the coefficients equal the projected points of the examples. The box constraint keeps the basis bounded, and the equality constraint preserves the equivalence to PNMF. Similar to PNMF, BPNMF is difficult to solve because its objective function is non-convex. To solve BPNMF, we develop an efficient algorithm in the framework of the augmented Lagrangian multiplier (ALM) method and prove that the ALM-based algorithm converges to local minima. Experimental results on two face image datasets demonstrate the effectiveness of BPNMF compared with representative methods.
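The following is a minimal sketch of the box-constrained formulation, assuming a gradient-based, ALM-flavored loop with a quadratic penalty on the coupling H = W^T X; the update schedule, step size, penalty weight rho, and bound u are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of BPNMF's structure: minimize ||X - WH||^2 subject to
# 0 <= W <= u (box) and H = W^T X (coupling), handled here with Lagrange
# multipliers plus a quadratic penalty. Step size and rho are assumptions.
import numpy as np

def bpnmf_sketch(X, k, u=1.0, rho=1.0, iters=200, lr=1e-3):
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = 0.1 * rng.random((m, k))        # small init keeps steps stable
    H = W.T @ X
    Lam = np.zeros_like(H)              # Lagrange multipliers
    for _ in range(iters):
        R = X - W @ H                   # reconstruction residual
        C = H - W.T @ X                 # coupling residual
        # gradients of the augmented Lagrangian
        gW = -R @ H.T - X @ (Lam + rho * C).T
        gH = -W.T @ R + Lam + rho * C
        W = np.clip(W - lr * gW, 0.0, u)     # enforce the box constraint
        H = np.maximum(H - lr * gH, 0.0)     # keep encodings non-negative
        Lam += rho * (H - W.T @ X)           # multiplier update
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 50)))
W, H = bpnmf_sketch(X, k=4)
```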
ISBN:
(Print) 9781479914821
Semi-supervised clustering aims at boosting the clustering performance on unlabeled samples by using the labels of a few labeled samples. Constrained NMF (CNMF) is one of the most significant semi-supervised clustering methods; it factorizes the whole dataset by NMF and constrains labeled samples from the same class to have identical encodings. In this paper, we propose a novel soft-constrained NMF (SCNMF) method by softening the hard constraint in CNMF. In particular, SCNMF factorizes the whole dataset into two lower-dimensional factor matrices using multiplicative update rules (MURs). To utilize the labels of the labeled samples, SCNMF iteratively normalizes both factor matrices after updating them with MURs, making the encodings of labeled samples close to their label vectors. It is therefore reasonable to believe that the encodings of unlabeled samples are also close to their corresponding label vectors. This strategy significantly boosts the clustering performance even when labeled samples are rather limited, e.g., when each class has only a single labeled sample. Since the normalization procedure never increases the computational complexity of the MUR, SCNMF is quite efficient and effective in practice. Experimental results on face image datasets illustrate both the efficiency and effectiveness of SCNMF compared with NMF and CNMF.
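As a rough illustration of the soft-constraint idea, the sketch below runs standard NMF multiplicative updates and then blends the encodings of labeled samples toward their one-hot label vectors before renormalizing; the blend weight alpha and all problem sizes are assumptions, not the paper's exact normalization.

```python
# Soft-constrained NMF sketch: Lee-Seung multiplicative updates plus a
# normalization step pulling labeled encodings toward label vectors.
import numpy as np

def scnmf_sketch(X, labels, k, iters=100, alpha=0.5, eps=1e-9):
    """X: m x n data; labels: dict {column index: class id in [0, k)}."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        # standard multiplicative update rules
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # soften CNMF's hard constraint: blend labeled encodings toward
        # their one-hot label vectors, then renormalize all columns
        for j, c in labels.items():
            target = np.zeros(k)
            target[c] = 1.0
            H[:, j] = (1 - alpha) * H[:, j] + alpha * target
        H /= H.sum(axis=0, keepdims=True) + eps
    return W, H

X = np.abs(np.random.default_rng(1).random((30, 12)))
W, H = scnmf_sketch(X, labels={0: 0, 5: 1}, k=3)
print(H.argmax(axis=0))   # cluster assignment per sample
```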
ISBN:
(Print) 9780819492852
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by various passive THz imaging devices manufactured by different companies. The performance of the current version of the code is greater than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The code allows increasing the number of pixels of the processed images without noticeable reduction of image quality. Its performance can be increased many times by using parallel image-processing algorithms. We develop original spatial filters that allow one to see objects with sizes less than 2 cm. The digital imagery is produced by passive THz imaging devices that capture images of objects hidden under opaque clothing. For images with high noise, we develop an approach that suppresses the noise through computer processing and yields a good-quality image. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution to the security problem.
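To illustrate the filter-bank idea (one captured frame, many filtered outputs produced at once), here is a hedged sketch using generic smoothing, sharpening, and edge kernels; the authors' original spatial filters are not disclosed in the abstract, so these kernels are stand-ins.

```python
# Filter-bank sketch: one input image, several spatially filtered
# outputs. The kernels are generic examples, not the authors' filters.
import numpy as np

def conv2d(img, kernel):
    """Plain valid-mode 2-D convolution (no external dependencies)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

box   = np.full((3, 3), 1 / 9)                                  # smoothing
sharp = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)  # sharpening
edges = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)

thz = np.random.rand(72, 72)          # stand-in for a >5000-pixel frame
outputs = [conv2d(thz, k) for k in (box, sharp, edges)]
```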
ISBN:
(Print) 9780819494283
Image de-noising in the spatial-temporal domain has been studied in depth in the field of digital image processing. However, the complexity of the algorithms often leads to high hardware resource usage, or to computational complexity and memory bandwidth issues, making their practical use impossible. In our research we attempt to solve these issues with an optimized implementation of a practical spatial-temporal de-noising algorithm. Spatial-temporal filtering was performed in Bayer RAW data space, which allowed us to benefit from predictable sensor noise characteristics and to reduce memory bandwidth requirements. The proposed algorithm efficiently removes different kinds of noise over a wide range of signal-to-noise ratios. In our algorithm, local motion compensation is also performed in Bayer RAW data space, preserving resolution and effectively improving the signal-to-noise ratio of moving objects. The main challenge for the use of spatial-temporal noise reduction algorithms in video applications is the compromise between the quality of the motion prediction, the complexity of the algorithm, and the required memory bandwidth. In photo and video applications it is very important that moving objects stay sharp while noise is efficiently removed from both the static background and the moving objects. Another important use case is when the background is non-static as well as the foreground, i.e., objects are moving in both. Taking into account the achievable improvement in PSNR (on the level of the best known noise reduction techniques, such as VBM3D) and the low algorithmic complexity, which enables practical use in commercial video applications, the results of our research can be very valuable.
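A much-simplified sketch of motion-adaptive temporal filtering is given below: frames are blended only where they agree, so static areas are averaged (noise drops) while moving edges stay sharp. The threshold and blend weight are assumptions, and the paper's method additionally operates on Bayer RAW data with local motion compensation.

```python
# Motion-adaptive temporal filtering sketch: average a pixel across
# frames only where no motion is detected, keeping moving edges sharp.
import numpy as np

def temporal_denoise(prev, cur, thresh=0.1, w=0.5):
    diff = np.abs(cur - prev)
    static = diff < thresh              # pixels with no apparent motion
    out = cur.copy()
    out[static] = w * prev[static] + (1 - w) * cur[static]
    return out

rng = np.random.default_rng(0)
clean = np.zeros((48, 48))
clean[10:30, 10:30] = 1.0
f0 = clean + 0.05 * rng.standard_normal(clean.shape)
f1 = clean + 0.05 * rng.standard_normal(clean.shape)
den = temporal_denoise(f0, f1)
# residual noise drops in the blended (static) regions
print(float(np.std(f1 - clean)), float(np.std(den - clean)))
```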
ISBN:
(Print) 9780819495129
Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R^2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to a 90-percent correlation with manually performed yield measurements. The same system is also able to determine the probability of bone chips remaining in the output product: it detects the presence or absence of clavicle bones with an accuracy of approximately 95 percent and fan bones with an accuracy of approximately 80 percent. This paper describes in detail the approach and design of the system and the results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
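The measurement principle lends itself to a short sketch: fit a linear intensity-to-thickness model, then integrate thickness over the pixel area to estimate the remaining meat volume. The calibration points and pixel size below are invented for illustration; only the reported R^2 = 0.94 correlation comes from the paper.

```python
# Yield-estimation sketch: transmitted intensity -> thickness (linear
# fit), then thickness integrated over pixel area -> remaining volume.
import numpy as np

# hypothetical calibration data: (intensity, measured thickness in mm)
I_cal = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
t_cal = np.array([0.5, 2.0, 4.1, 6.2, 8.0])
a, b = np.polyfit(I_cal, t_cal, 1)          # thickness ~ a*I + b

def remaining_meat_volume(image, pixel_area_mm2=0.25):
    """image: transmitted-intensity frame with values in [0, 1]."""
    thickness = np.clip(a * image + b, 0, None)   # mm per pixel
    return thickness.sum() * pixel_area_mm2      # total volume, mm^3

frame = np.random.rand(100, 100)                 # toy input frame
print(remaining_meat_volume(frame), "mm^3 of meat left (toy input)")
```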
ISBN:
(Print) 9781467348669; 9781467348652
Unimodal biometric systems have to contend with a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. Some of these limitations can be addressed by deploying multimodal biometric systems that integrate the evidence presented by multiple sources of information. This paper discusses a multibiometric authentication system with a GUI interface. The proposal includes an algorithm to extract features from a given fingerprint and a palm print feature extraction algorithm to extract palm print features. These two algorithms are then integrated to perform multibiometric authentication. By processing the test image of a person, the identity of the person is displayed along with his or her own image. The proposed fingerprint and palm print algorithms are found to require less computation time and occupy less memory space than existing algorithms. The fingerprint and palm print matching results for the proposed methods were validated, and the system's integrity was evaluated for all cases.
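Score-level fusion is one common way to integrate two matchers and can be sketched as follows; the normalization ranges, weight, and threshold are illustrative assumptions rather than the paper's actual integration scheme.

```python
# Score-level fusion sketch for a fingerprint + palm print system:
# normalize each matcher's score, combine with a weighted sum, threshold.
import numpy as np

def min_max(score, lo, hi):
    """Map a raw matcher score onto [0, 1] given its known range."""
    return (score - lo) / (hi - lo)

def fused_decision(fp_score, pp_score, w_fp=0.6, threshold=0.5,
                   fp_range=(0, 100), pp_range=(0, 1)):
    s_fp = min_max(fp_score, *fp_range)
    s_pp = min_max(pp_score, *pp_range)
    fused = w_fp * s_fp + (1 - w_fp) * s_pp   # weighted-sum fusion
    return fused, fused >= threshold          # (score, accept?)

print(fused_decision(fp_score=72, pp_score=0.4))
```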
ISBN:
(Print) 9780819494283
Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and concerns the different sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M. It is shown that some improvement of its performance can be obtained. Then, the visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS metrics are considered. It is shown that even the best metrics are unable to assess the visual quality of distorted images adequately. The reasons relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS metrics.
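One way to incorporate the Weber-Fechner property into a PSNR-like metric is to weight each pixel's squared error by a sensitivity term that depends on the local mean; the sketch below does exactly that, but the weighting function is an assumption, not the modified PSNR-HVS-M evaluated in the paper.

```python
# Weber-Fechner-weighted PSNR sketch: squared errors are weighted by a
# sensitivity term that falls off with local mean intensity.
import numpy as np

def weber_weighted_psnr(ref, dist, peak=255.0, eps=1.0):
    # local mean via a 3x3 box average (edge padding keeps shapes equal)
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(ref, 1, mode="edge")
    local_mean = sum(
        pad[i:i + ref.shape[0], j:j + ref.shape[1]] * k[i, j]
        for i in range(3) for j in range(3)
    )
    # Weber-Fechner: darker fragments are more sensitive to distortion
    weight = 1.0 / (local_mean + eps)
    weight /= weight.mean()          # keep the metric scale comparable
    mse = np.mean(weight * (ref.astype(float) - dist.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
dist = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(weber_weighted_psnr(ref, dist))
```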