ISBN:
(Print) 9789811021046; 9789811021039
Biometric recognition systems use automated processes in which stored templates are matched against live biometric traits. The correlation filter is a promising technique for such matching, in which the correlation output peaks at the location of the target in the correlation plane. Among the available biometrics, the finger knuckle print (FKP) offers significant usable features for person identification and verification. Motivated by the utility of the FKP, this paper presents a new scheme for person authentication using FKP based on advanced correlation filters (ACF). Numerical experiments have been carried out on the PolyU FKP database to evaluate the performance of the designed filters. The results demonstrate the effectiveness of the proposed scheme and show better discrimination between the genuine and impostor populations.
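The matching principle above can be sketched with a toy 1-D example: the correlation output is computed at every shift, and the peak marks the target location. This is only the basic correlation-peak idea, not the paper's ACF design; the signal and template values are illustrative.

```python
# Illustrative sketch only: locating a target via the correlation peak,
# the basic principle behind correlation-filter matching. The advanced
# filter designs in the paper are more involved; values here are toys.

def cross_correlate(signal, template):
    """Return correlation scores of `template` at each valid shift."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def locate_target(signal, template):
    """Peak location in the correlation plane marks the target."""
    scores = cross_correlate(signal, template)
    return max(range(len(scores)), key=scores.__getitem__)

signal = [0, 0, 1, 3, 1, 0, 0, 0]
template = [1, 3, 1]
print(locate_target(signal, template))  # peak where the pattern sits: 2
```

In practice this correlation is computed in the frequency domain via FFTs, which is what makes filter-based matching fast on full images.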
Biometric systems are used for identification- and verification-based applications such as e-commerce, physical access control, banking, and forensics. Among the many kinds of biometric identifiers, the finger knuckle print (FKP) is a promising trait because of its textural features. In this paper, the wavelet transform (WT) and Gabor filters are used to extract FKP features. The WT decomposes the FKP into different frequency subbands, whereas Gabor filters capture its orientation and frequency content. The information from the horizontal subbands and the content of the Gabor representations are combined to build the FKP template, which is stored for the verification system. The experimental results show that wavelet families combined with Gabor filtering give a best FKP recognition rate of 96.60%.
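The subband decomposition step can be illustrated with a minimal single-level 1-D Haar wavelet split into approximation (low-pass) and detail (high-pass) bands. The paper applies a 2-D transform to FKP images and combines subbands with Gabor responses; this toy shows only the subband split.

```python
# A minimal sketch of one pipeline step: a single-level 1-D Haar
# wavelet decomposition. The 2-D case applies this along rows and
# columns; input values here are illustrative.

def haar_step(row):
    """Split a row into (approximation, detail) Haar subbands."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx, detail

a, d = haar_step([4, 2, 6, 6, 5, 1, 0, 2])
print(a)  # [3.0, 6.0, 3.0, 1.0]  smooth content
print(d)  # [1.0, 0.0, 2.0, -1.0] edge/texture content
```

The detail band is where the textural information exploited for FKP matching lives; repeating the split on the approximation band yields deeper decomposition levels.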
The problem of searching for a digital image in a very large database is called content-based image retrieval (CBIR). Texture represents spatial or statistical repetition in pixel intensity and orientation. A brain tumor forms when abnormal cells grow within the brain. In this paper, we develop a texture-based feature extraction scheme for MRI brain tumor image retrieval. It has two parts, namely feature extraction and classification. First, texture features are extracted using the curvelet transform, the contourlet transform, and the Local Ternary Pattern (LTP). Second, supervised learning algorithms, namely the Deep Neural Network (DNN) and the Extreme Learning Machine (ELM), are used to classify the brain tumor images. The experiment is performed on a collection of 1000 brain tumor images with different modalities and orientations. Experimental results reveal that the contourlet transform performs better than the curvelet transform and the local ternary pattern.
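Of the three texture descriptors named above, the Local Ternary Pattern is simple enough to sketch on a single 3x3 patch: each neighbor is compared to the center within a tolerance band and the resulting ternary code is split into upper and lower binary patterns. The threshold and pixel values below are assumptions for illustration; the curvelet and contourlet stages are omitted.

```python
# Hedged sketch of the Local Ternary Pattern (LTP) on one 3x3 patch.
# Neighbors >= center + t code as +1, <= center - t as -1, else 0;
# the ternary code is split into two binary patterns.

def ltp_codes(patch, t=5):
    """Return (upper, lower) binary patterns for the patch center."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    upper = [1 if p >= c + t else 0 for p in neighbors]
    lower = [1 if p <= c - t else 0 for p in neighbors]
    return upper, lower

patch = [[60, 52, 70],
         [48, 54, 59],
         [40, 30, 66]]
up, lo = ltp_codes(patch, t=5)
print(up, lo)  # [1, 0, 1, 1, 1, 0, 0, 0] [0, 0, 0, 0, 0, 1, 1, 1]
```

Sliding this window over the image and histogramming the two binary patterns yields the LTP texture feature fed to the classifier.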
Scene classification remains a challenging task in computer vision applications due to a wide range of intraclass and interclass variations. A robust feature extraction technique and an effective classifier are required to achieve satisfactory recognition performance. Herein, we propose a nonregularized state preserving extreme learning machine (NSPELM) to perform scene classification tasks. We employ a Bag-of-Words (BoW) model for feature extraction prior to classification. The BoW feature is obtained using a regular grid for point selection and the Speeded Up Robust Features (SURF) technique for describing the selected points. The performance of NSPELM is tested and evaluated on three standard scene category classification datasets. Compared with the standard extreme learning machine classifier, the proposed NSPELM algorithm yields better recognition accuracy.
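The Bag-of-Words stage described above can be sketched as hard-assigning each local descriptor to its nearest visual word and counting assignments into a histogram. The 2-D "descriptors" and vocabulary below are toy stand-ins for grid-sampled SURF vectors; the NSPELM classifier itself is not reproduced.

```python
# Sketch of the BoW feature: nearest-word assignment plus histogram.
# Vocabulary and descriptors are toy values standing in for a learned
# codebook and SURF descriptors on a regular grid.

def nearest_word(desc, vocab):
    """Index of the visual word closest in squared Euclidean distance."""
    return min(range(len(vocab)),
               key=lambda k: sum((d - v) ** 2 for d, v in zip(desc, vocab[k])))

def bow_histogram(descriptors, vocab):
    hist = [0] * len(vocab)
    for desc in descriptors:
        hist[nearest_word(desc, vocab)] += 1
    return hist

vocab = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
descriptors = [(0.1, 0.2), (0.9, 1.1), (1.9, 0.1), (1.1, 0.8)]
print(bow_histogram(descriptors, vocab))  # [1, 2, 1]
```

The resulting fixed-length histogram is what makes variable numbers of local features usable as input to a classifier such as an ELM.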
The visual bag-of-words model has been applied in the recent past for content-based image retrieval. In this paper, we propose a novel assignment model of visual words for representing an image patch. In particular, a patch is represented by a vector whose elements denote the affinities of the patch to a set of its closest, most influential visual words. We also introduce a dissimilarity measure, consisting of two terms, for comparing a pair of image patches. The first term captures the difference in affinities of the patches to the common set of influential visual words. The second term counts the visual words that influence only one of the two patches and penalizes the measure accordingly. Experimental results on the publicly available COIL-100 image database clearly demonstrate the superior performance of the proposed content-based image retrieval (CBIR) method over similar existing approaches.
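The two-term measure described above can be sketched directly: patches are maps from influential word IDs to affinities, term one sums affinity gaps on shared words, and term two penalizes words influencing only one patch. The affinity values and penalty weight are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of the two-term patch dissimilarity. Affinities are
# {word_id: affinity} maps; the penalty weight is an assumed parameter.

def dissimilarity(a, b, penalty=1.0):
    """Two-term patch dissimilarity over influential visual words."""
    common = set(a) & set(b)
    # Term 1: affinity differences on the shared influential words.
    term1 = sum(abs(a[w] - b[w]) for w in common)
    # Term 2: penalize words influencing only one of the two patches.
    term2 = penalty * len(set(a) ^ set(b))
    return term1 + term2

p = {3: 0.6, 7: 0.3, 9: 0.1}   # patch -> affinities to its closest words
q = {3: 0.5, 7: 0.4, 12: 0.1}
print(dissimilarity(p, q))  # 0.2 affinity gap + 2 unshared words
```

Keeping only the few closest words per patch keeps both the representation and the comparison sparse and cheap.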
A novel region of interest (ROI) segmentation for detecting Glioblastoma multiforme (GBM) tumors in magnetic resonance (MR) images of the brain is proposed using a two-stage thresholding method. We define multiple intervals for multilevel thresholding using a meta-heuristic optimization technique called Discrete Curve Evolution. In each interval, a threshold is selected by bi-level Otsu's method. The ROI is then extracted from a single seed initialization placed on the ROI by the user. The proposed segmentation technique is more accurate than existing methods and also has very low time complexity. The experimental evaluation is carried out on contrast-enhanced T1-weighted MRI slices of three patients with corresponding ground truth of the tumor regions. Performance measured by the Jaccard and Dice indices of the segmented ROI demonstrates higher accuracy than existing methods.
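The bi-level Otsu step used inside each interval can be sketched as choosing the threshold that maximizes between-class variance. The intensity values below are toys, and the Discrete Curve Evolution stage that selects the intervals is not reproduced.

```python
# Sketch of bi-level Otsu thresholding: pick the cut maximizing the
# between-class variance w0*w1*(m0 - m1)^2. Toy intensities only.

def otsu_threshold(values):
    best_t, best_var = None, -1.0
    levels = sorted(set(values))
    for t in levels[:-1]:  # candidate thresholds
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

pixels = [10, 12, 11, 13, 200, 205, 198, 210]
print(otsu_threshold(pixels))  # 13: separates the two intensity clusters
```

Running this within each DCE-selected interval rather than over the full range is what turns the bi-level method into a multilevel one.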
This paper proposes a novel Robust Parametric Twin Support Vector Machine (RPTWSVM) classifier to deal with the heteroscedastic noise present in the human activity recognition framework. Unlike Par-nu-SVM, RPTWSVM solves two optimization problems, each of which uses the structural information of the corresponding class to control the effect of heteroscedastic noise on the generalization ability of the classifier. Further, the resulting hyperplanes adjust themselves to maximize the parametric insensitive margin. The efficacy of the proposed framework has been evaluated on standard UCI benchmark datasets. Moreover, we investigate the performance of RPTWSVM on the human activity recognition problem. Experimental results support the effectiveness and practicability of the proposed algorithm.
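The twin-SVM family shares one decision rule: each class gets its own hyperplane, and a sample is assigned to the class whose hyperplane is nearer. The sketch below assumes two already-trained hyperplanes; it illustrates only this generic rule, not RPTWSVM's optimization for heteroscedastic noise.

```python
# Sketch of the generic twin-SVM decision rule with assumed,
# pre-trained hyperplanes (w, b): classify by the nearer plane.

import math

def hyperplane_distance(x, w, b):
    """Point-to-hyperplane distance |w.x + b| / ||w||."""
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / math.hypot(*w)

def twin_svm_predict(x, plane_pos, plane_neg):
    d_pos = hyperplane_distance(x, *plane_pos)
    d_neg = hyperplane_distance(x, *plane_neg)
    return +1 if d_pos <= d_neg else -1

plane_pos = ([1.0, 0.0], -1.0)   # plane x1 = 1, fit to class +1
plane_neg = ([1.0, 0.0], -5.0)   # plane x1 = 5, fit to class -1
print(twin_svm_predict([1.5, 2.0], plane_pos, plane_neg))  # 1
print(twin_svm_predict([4.8, 0.3], plane_pos, plane_neg))  # -1
```

What RPTWSVM changes is how each plane is fit, shaping a parametric insensitive margin around it so that class-dependent noise levels are tolerated.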
Although researchers have proposed many techniques for background subtraction, more efficient algorithms are still needed in terms of adaptability to multimodal environments. We present a new background modeling algorithm based on temporal-local sample density outlier detection. We use the temporal-local densities of pixel samples as the decision measurement for background classification, which allows us to deal with dynamic backgrounds more efficiently and accurately. Experimental results show the outstanding performance of the proposed algorithm in multimodal environments.
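The density test can be sketched for a single pixel: a new value counts as background if enough of its stored temporal samples lie within a small radius, i.e. if it falls in a dense region rather than being an outlier. The radius and minimum-match count are assumed parameters, not the paper's values.

```python
# Hedged sketch of density-based background classification for one
# pixel's temporal sample history. Radius and min_matches are assumed.

def is_background(value, samples, radius=10, min_matches=2):
    """Density test: count stored samples close to the new value."""
    matches = sum(1 for s in samples if abs(s - value) <= radius)
    return matches >= min_matches

history = [100, 104, 98, 101, 230]  # multimodal: mostly ~100, one outlier
print(is_background(102, history))  # True: lands in a dense region
print(is_background(228, history))  # False: only one nearby sample
```

Because the decision depends on local sample density rather than a single mean, a pixel whose history contains several modes (e.g. waving foliage) is still classified correctly.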
With the rapid growth of digital libraries, e-governance, and Internet applications, huge volumes of documents are being generated, communicated, and archived in compressed form for better storage and transfer efficiency. In such a large repository of compressed documents, frequent operations like keyword searching and document retrieval must currently be carried out after decompression and subsequently with the help of an OCR. Therefore, developing keyword spotting techniques that work directly on compressed documents is a promising and challenging research issue. Against this backdrop, the paper presents a novel approach for searching keywords directly in run-length compressed documents without going through the stages of decompression and OCRing. The proposed method extracts simple, font-size-invariant features, such as the number of run transitions and the correlation of runs over selected regions of test words, and matches them against those of the user-queried word. In the subsequent step, keywords are spotted in the compressed document based on the matching score. The idea of decompression-less and OCR-less word spotting directly in compressed documents is the major contribution of this paper. The method is evaluated on a data set of compressed documents, and the preliminary results validate the proposed idea.
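The first feature named above, the number of run transitions, can be computed straight from run-length data without decompression: each row of a word image is a list of alternating run lengths, and its transition count is simply one less than the number of runs. The matching score below is a simple illustration, not the paper's exact measure.

```python
# Sketch of a font-size-invariant feature computed directly on
# run-length-encoded rows, with an illustrative matching score.

def run_transitions(rle_rows):
    """Transitions between runs in each row of a compressed word."""
    return [len(row) - 1 for row in rle_rows]

def match_score(query_rows, test_rows):
    q, t = run_transitions(query_rows), run_transitions(test_rows)
    return sum(abs(a - b) for a, b in zip(q, t))  # 0 = identical profile

query = [[3, 2, 3], [1, 6, 1], [3, 2, 3]]   # RLE rows of a query word
test  = [[4, 1, 3], [2, 5, 1], [3, 2, 3]]   # same structure, scaled runs
print(match_score(query, test))  # 0: same transition profile
```

Note that scaling the font changes the run lengths but not the number of runs per row, which is what makes the transition count font-size invariant.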
Information security has become one of the major concerns in the field of communication today. Steganography is one way to achieve secure communication, as observers cannot detect the existence of the secret information. The need to parallelize an algorithm grows, since even a good algorithm becomes impractical if its computation time is large. In this paper, two parallel algorithms are proposed: parallel RSA (Rivest-Shamir-Adleman) cryptography with 2D-DCT (Discrete Cosine Transform) steganography, and parallel chaotic 2D-DCT steganography. The performance of both algorithms on larger images is evaluated, and chaotic steganography proves to be the more efficient algorithm for larger messages. The parallelized versions also reduce processing time over the serial versions, with speed-up ratios of 1.6 and 3.18.
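The cryptographic stage of the first scheme can be sketched with textbook RSA on toy primes (no padding, far too small for real use): the secret is encrypted before the DCT-based steganography stage embeds it, and that embedding stage is not reproduced here.

```python
# Textbook-RSA sketch of the cryptographic stage with toy primes.
# Real deployments need large primes and padding (e.g. OAEP).

def rsa_keys(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)       # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def crypt(m, key):
    """Encrypt or decrypt: modular exponentiation with the given key."""
    k, n = key
    return pow(m, k, n)

public, private = rsa_keys(p=61, q=53, e=17)
cipher = crypt(42, public)
print(crypt(cipher, private))  # 42: decryption recovers the message
```

Since each message block is encrypted independently by the same modular exponentiation, the workload splits naturally across workers, which is the source of the speed-ups the paper reports.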