ISBN:
(Print) 9781467345286
Face recognition is emerging as one of the most significant and challenging aspects of security for identifying individuals in fields such as banking, police records, and biometrics, beyond an individual's fingerprint and documented identification proofs. To date, initiating a net-banking session has required only the appropriate user name and password for authentication. This project takes a step toward easier and more reliable authentication by requiring a face image along with the user name and password, so that biometric verification ensures only the account holder can access the account. However, while this sensitive face image is transferred from the client machine to the bank server, it must be protected from hackers and intruders; it is therefore transmitted through covert communication using wavelet-decomposition-based steganography. Face images are affected by differing expressions, poses, occlusions, illumination, and aging over time, and the main difficulty in face recognition is that images of the same person can differ more than images of different people. When image information is jointly coordinated across three aspects, image space, scale, and orientation, it carries far stronger cues than any single domain alone. The proposed method therefore combines Local Binary Pattern (LBP) and Gabor features to significantly improve face recognition performance when comparing face presentations of an individual.
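As a rough illustration of the LBP-plus-Gabor feature fusion described above, the sketch below concatenates grid-wise uniform-LBP histograms with Gabor magnitude statistics using scikit-image and OpenCV; the function name, filter parameters, and grid size are illustrative choices, not the authors' exact pipeline.

```python
# Illustrative sketch: fusing LBP and Gabor features for a face matcher.
# Assumes scikit-image and OpenCV; all parameter values are illustrative.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def extract_face_features(gray_face, lbp_points=8, lbp_radius=1, grid=4):
    """Concatenate grid-wise uniform-LBP histograms with Gabor magnitude statistics."""
    # LBP part: uniform LBP codes, histogrammed per grid cell to keep spatial layout.
    lbp = local_binary_pattern(gray_face, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2
    h, w = lbp.shape
    lbp_hists = []
    for i in range(grid):
        for j in range(grid):
            cell = lbp[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            lbp_hists.append(hist)
    # Gabor part: mean/std of filter responses over a few orientations (scale/orientation cues).
    gabor_stats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kernel)
        gabor_stats.extend([response.mean(), response.std()])
    return np.concatenate([np.concatenate(lbp_hists), np.array(gabor_stats)])

demo_face = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)  # stand-in face crop
print(extract_face_features(demo_face).shape)  # feature vector to feed a classifier such as an SVM
```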
Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules th...
ISBN:
(Print) 9781479920938
Authorship attribution (AA), or author identification, refers to the problem of identifying the author of an unseen text. From the machine learning point of view, AA can be viewed as a multiclass, single-label text-categorization task. The task rests on the assumption that the author of an unseen text can be discriminated by comparing textual features extracted from that text with those of texts by known authors. In this paper the effects of 29 different textual features on the accuracy of author identification on Persian corpora are evaluated in 30 different scenarios. Several classification algorithms are applied to corpora with 2, 5, 10, 20, and 40 different authors and compared. The evaluation results show that information about the words and verbs used is the most reliable criterion for AA tasks, and that NLP-based features are more reliable than BOW-based features.
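The abstract frames AA as multiclass, single-label text categorization; the hedged sketch below shows that framing with scikit-learn, using a toy two-author corpus and TF-IDF word n-grams in place of the paper's 29 textual features and Persian corpora.

```python
# Illustrative sketch: authorship attribution as multiclass, single-label text
# categorization with scikit-learn. The toy corpus and TF-IDF word n-grams are
# placeholders, not the paper's feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["a short sample document written by the first author",
         "another sample document written by the second author"]
authors = ["author_1", "author_2"]  # one label per text

model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),  # lexical (BOW-style) features
    LinearSVC(),                                           # one-vs-rest multiclass SVM
)
model.fit(texts, authors)
print(model.predict(["an unseen text whose author we want to identify"]))
```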
ISBN:
(Print) 9781467329071; 9781467329064
Learning materials are structured as Learning Objects and stored in a Learning Object Repository (LOR), from which they are used in various courses of an e-learning environment. A Learning Management System (LMS) aggregates these objects from the LOR and provides the infrastructure and platform through which learning content is delivered and managed; adaptation, personalization, and usage statistics are some of its functions. However, the exponential growth in available Learning Objects makes it increasingly difficult to find the right resource for a user based on the learning context or the user's preferences, and keyword search returns a huge quantity of information. In this paper we consider the users' search patterns stored in search logs and generate association rules from them using a Frequent Pattern Tree. With the FP-Tree frequent-itemset mining approach, a list of frequent learning objects can be generated so that a reduced, appropriate, and relevant set of objects is delivered to users.
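A minimal sketch of the FP-growth step is shown below, assuming the mlxtend library; the session data, learning-object names, and support/confidence thresholds are invented for illustration and do not come from the paper.

```python
# Illustrative sketch: FP-growth over search-log sessions to surface frequently
# co-accessed learning objects. Assumes mlxtend; data and thresholds are made up.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Each row: learning objects retrieved together in one logged search session (hypothetical).
sessions = [
    ["LO_intro_java", "LO_oop_basics", "LO_exceptions"],
    ["LO_intro_java", "LO_oop_basics"],
    ["LO_oop_basics", "LO_exceptions"],
    ["LO_intro_java", "LO_exceptions"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(sessions), columns=encoder.columns_)

frequent = fpgrowth(onehot, min_support=0.5, use_colnames=True)              # frequent LO itemsets
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)  # recommendation rules
print(rules[["antecedents", "consequents", "support", "confidence"]])
```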
ISBN:
(Print) 9781479901814; 9781467364713
Network anomaly detection aims to detect patterns in network traffic data that do not conform to established normal behavior. Distinguishing different anomaly patterns in a large amount of data is a challenge, let alone visualizing them in a comparative perspective. Recently, unsupervised learning methods such as K-means [3], the self-organizing map (SOM) [2], and the growing hierarchical self-organizing map (GHSOM) [1] have been shown to facilitate network anomaly detection [4][5]. However, no study has addressed both the mining and the detecting task. This study leverages the GHSOM to analyze network traffic data and visualize the distribution of attack patterns together with their hierarchical relationships. In the mining stage, the geometric distances between patterns and their descriptive information are revealed in the topological space, and the density and sample size of each node help to detect anomalous network traffic. In the detecting stage, the study extends the traditional GHSOM and uses a support vector machine (SVM) [6] to classify network traffic data into predefined categories. The proposed approach (1) helps to understand the behavior of anomalous network traffic, (2) provides effective classification rules to facilitate network anomaly detection, and (3) accumulates network anomaly detection knowledge for both mining and detecting purposes. A public dataset and a private dataset are used to evaluate the approach. The expected result is to confirm that the proposed approach helps in understanding network traffic data and that the detecting mechanism is effective for identifying anomalous behavior.
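The two-stage structure (unsupervised mining, then SVM detection) can be sketched as below; since a GHSOM is not available in scikit-learn, K-means (also cited in the abstract as a related unsupervised method) stands in for the mining stage, and the synthetic traffic features are placeholders.

```python
# Illustrative two-stage sketch: an unsupervised mining stage followed by a
# supervised SVM detection stage. K-means substitutes for a GHSOM here purely
# to show the structure; the synthetic features and labels are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 5))   # stand-in features of normal traffic
attack = rng.normal(4.0, 1.5, size=(40, 5))    # stand-in features of anomalous traffic
X = StandardScaler().fit_transform(np.vstack([normal, attack]))
y = np.array([0] * 200 + [1] * 40)             # 0 = normal, 1 = anomaly

# Mining stage: group traffic into nodes and inspect each node's size and anomaly density.
nodes = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
for node_id in range(6):
    members = nodes.labels_ == node_id
    print(f"node {node_id}: size={members.sum()}, anomaly ratio={y[members].mean():.2f}")

# Detecting stage: an SVM classifies traffic into the predefined categories.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```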
ISBN:
(Print) 9783642407277; 9783642407284
This paper presents a new sequential multi-task learning model with the following functions: one-pass incremental learning, task allocation, knowledge transfer, task consolidation, learning of multi-label data, and active learning. The model learns multi-label data with incomplete task information incrementally. When no task information is given, class labels are allocated to appropriate tasks based on prediction errors; this allocation can therefore fail, especially in the early stages. To recover from misallocation, the proposed model has a backup mechanism called task consolidation, which can modify the task allocation based not only on prediction errors but also on task labels in the training data (if given) and a heuristic for multi-label data. The experimental results demonstrate that the proposed model performs well in both classification and task categorization.
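A hedged sketch of the error-driven task-allocation idea follows: a label arriving without task information is assigned to whichever task's model currently explains it best. The two toy per-task classifiers and the probability-based error measure are stand-ins for the paper's incremental learners.

```python
# Illustrative sketch of error-driven task allocation: a class label arriving
# with no task information is assigned to the task whose current model predicts
# it with the smallest error. The toy per-task classifiers are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
# Two tasks with different input distributions, warmed up on a few samples each.
X_a, y_a = rng.normal(0.0, 1.0, (50, 3)), rng.integers(0, 2, 50)
X_b, y_b = rng.normal(3.0, 1.0, (50, 3)), rng.integers(0, 2, 50)
tasks = [SGDClassifier(loss="log_loss", random_state=0).fit(X_a, y_a),
         SGDClassifier(loss="log_loss", random_state=0).fit(X_b, y_b)]

def allocate(x, label):
    """Assign (x, label) to the task whose model gives `label` the highest probability,
    i.e. the lowest prediction error on this sample."""
    errors = [1.0 - clf.predict_proba([x])[0][label] for clf in tasks]
    return int(np.argmin(errors))

print("allocated to task", allocate(rng.normal(0.0, 1.0, 3), 1))
```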
ISBN:
(Print) 9783037855782
Based on an LS-SVM pattern recognizer, this paper develops an intelligent method for change-point detection and applies the proposed model to detecting change-points of the process mean shift in auto-correlated time-series processes. In this research, the LS-SVM algorithm and a moving-window method are used to locate the mean-shift signal; the LS-SVM pattern recognizer is designed and its performance is evaluated in terms of accuracy rate. Simulation results show that the proposed intelligent model is an effective method for detecting change-points in ARMA data series.
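The moving-window detection scheme might look like the sketch below, where a standard soft-margin SVC stands in for the LS-SVM and an AR(1) series with an injected mean shift serves as synthetic auto-correlated data; the window width and shift size are arbitrary choices.

```python
# Illustrative sketch: moving-window change-point detection with an SVM pattern
# recognizer. A standard SVC stands in for the paper's LS-SVM; data are synthetic.
import numpy as np
from sklearn.svm import SVC

def ar1(n, phi=0.5, shift_at=None, shift=3.0, rng=None):
    """Auto-correlated AR(1) series with an optional sustained mean shift."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
        if shift_at is not None and t >= shift_at:
            x[t] += shift * (1 - phi)   # shifts the long-run process mean by `shift`
    return x

def windows(series, width=20):
    return np.array([series[i:i + width] for i in range(len(series) - width + 1)])

rng = np.random.default_rng(0)
# Training windows drawn from an in-control process and from a shifted process.
X_train = np.vstack([windows(ar1(500, rng=rng)), windows(ar1(500, shift_at=0, rng=rng))])
y_train = np.array([0] * 481 + [1] * 481)   # 481 = 500 - 20 + 1 windows per series
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Monitoring: slide the window over a new series and flag the first "shift" window.
test = ar1(300, shift_at=200, rng=rng)
flags = clf.predict(windows(test))
first = int(np.argmax(flags == 1)) if flags.any() else None
print("first window flagged as mean shift starts at t =", first)
```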
ISBN:
(Digital) 9783642396786
ISBN:
(Print) 9783642396779; 9783642396786
This book constitutes the refereed proceedings of the 9th International Conference on Intelligent Computing, ICIC 2013, held in Nanning, China, in July 2013. The 192 revised full papers presented in the three volumes LNCS 7995, LNAI 7996, and CCIS 375 were carefully reviewed and selected from 561 submissions. The papers in this volume (CCIS 375) are organized in topical sections on Neural Networks; Systems Biology and Computational Biology; Computational Genomics and Proteomics; Knowledge Discovery and Data Mining; Evolutionary Learning and Genetic Algorithms; Machine Learning Theory and Methods; Biomedical Informatics Theory and Methods; Particle Swarm Optimization and Niche Technology; Unsupervised and Reinforcement Learning; Intelligent Computing in Bioinformatics; Intelligent Computing in Finance/Banking; Intelligent Computing in Petri Nets/Transportation Systems; Intelligent Computing in Signal Processing; Intelligent Computing in Pattern Recognition; Intelligent Computing in Image Processing; Intelligent Computing in Robotics; Intelligent Computing in Computer Vision; Special Session on Biometrics System and Security for Intelligent Computing; Special Session on Bio-inspired Computing and Applications; Computer Human Interaction using Multiple Visual Cues and Intelligent Computing; Special Session on Protein and Gene Bioinformatics: Analysis, Algorithms and Applications.
ISBN:
(Print) 9781479902699; 9781479902675
One of the most recently developed face recognition techniques uses PSO-SVM, but this method is weak in the initial phase of PSO: the initial population is generated randomly, so the resulting population, and therefore the outcome, may also be random, and the method cannot be guaranteed to produce precise results. To avoid this drawback, a modified face recognition method is proposed in this paper: a face recognition method based on opposition-based PSO with SVM (OPSO-SVM). To accomplish face recognition with the proposed OPSO-SVM, feature extraction is first carried out on the image database; the extracted features are then given to the SVM training and testing process. In OPSO, the population is generated in two ways: a random population, as in standard PSO, and an opposition population derived from the random population values. The SVM parameters optimized by OPSO allow the face recognition process to be performed efficiently. Two human face databases, FERET and YALE, are used to analyze the performance of the proposed OPSO-SVM technique, which is also compared with PSO-SVM and standard SVM.
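The opposition-based initialization can be sketched as follows: each random particle x in [lb, ub] is paired with its opposite, lb + ub - x, and the fitter half of the combined pool seeds the swarm. The toy fitness function below is a placeholder for the SVM validation accuracy that OPSO-SVM would optimize over parameters such as C and gamma.

```python
# Illustrative sketch of opposition-based swarm initialization for PSO.
# The toy fitness is a stand-in for SVM validation accuracy over (C, gamma).
import numpy as np

def opposition_init(n_particles, lb, ub, fitness, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    random_pop = lb + rng.random((n_particles, lb.size)) * (ub - lb)   # standard random init
    opposite_pop = lb + ub - random_pop                                # opposition-based candidates
    combined = np.vstack([random_pop, opposite_pop])
    scores = np.array([fitness(p) for p in combined])
    return combined[np.argsort(scores)[-n_particles:]]                 # keep the fitter half

# Hypothetical fitness: higher is better, peaked at (C, gamma) = (1.0, 0.1).
toy_fitness = lambda p: -np.sum((p - np.array([1.0, 0.1])) ** 2)
swarm = opposition_init(10, lb=[0.01, 0.001], ub=[100.0, 1.0], fitness=toy_fitness)
print(swarm)
```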
ISBN:
(Print) 9783319029573; 9783319029580
Dimensionality reduction of an information system is the problem of eliminating unimportant attributes from the original attribute set while avoiding loss of information in the data mining process; a subset of attributes that is highly correlated with the decision attributes is selected. In this paper, the performance of the great deluge algorithm for rough set attribute reduction is investigated by comparing it with other approaches in the literature in terms of the cardinality of the obtained reducts (subsets), the time required to obtain reducts, the number of dependency-degree function evaluations, the number of rules generated from the reducts, and classification accuracy. An interactive interface is first developed so that the user can easily select the reduction parameters; this user interface is developed with visual data mining in mind. The model has been tested on standard datasets from the UCI machine learning repository. Experimental results show the effectiveness of the method, especially with respect to run time and the classification accuracy of the generated rules; the method outperformed other approaches on the M-of-N, Exactly, and LED datasets, achieving 100% accuracy.
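A hedged sketch of a great-deluge style search for a rough-set reduct is given below; the dependency-degree function follows the standard rough-set definition (fraction of objects in the positive region), while the toy decision table and the simplified water-level schedule are illustrative rather than the paper's settings.

```python
# Illustrative sketch: great-deluge style search over attribute subsets, scored
# by the rough-set dependency degree. Decision table and schedule are toy values.
import random

def dependency_degree(table, decisions, attrs):
    """Fraction of objects whose equivalence class w.r.t. `attrs` is decision-consistent."""
    if not attrs:
        return 0.0
    classes = {}
    for i, row in enumerate(table):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(i)
    consistent = sum(len(idx) for idx in classes.values()
                     if len({decisions[i] for i in idx}) == 1)
    return consistent / len(table)

def great_deluge_reduct(table, decisions, n_attrs, iters=500, seed=0):
    rng = random.Random(seed)
    def quality(s):  # prefer full dependency first, then fewer attributes
        return (dependency_degree(table, decisions, s), -len(s))
    current = set(range(n_attrs))            # start from the full attribute set
    best, level = set(current), quality(current)
    for _ in range(iters):
        candidate = set(current)
        candidate.symmetric_difference_update({rng.randrange(n_attrs)})  # flip one attribute
        if quality(candidate) >= level:      # accept only if not below the water level
            current = candidate
            if quality(current) > quality(best):
                best = set(current)
            level = max(level, quality(current))  # the water level only rises
    return sorted(best)

# Toy decision table: 4 condition attributes per object; decision labels kept separately.
table = [[1, 0, 1, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]]
decisions = [0, 0, 1, 1]
print("reduct:", great_deluge_reduct(table, decisions, n_attrs=4))
```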