ISBN: (Print) 9781538629345
Real-time computing systems attract increasing attention in both academic research and industrial applications. Apache Storm, one such system, is widely used for machine learning, distributed remote procedure call (RPC), and similar workloads because of its stream-processing model and high fault tolerance. However, existing approaches to decomposing a Storm topology cannot guarantee optimized performance. In this paper, we propose an adaptive topology decomposition algorithm for Storm, in which decomposition based on cluster status and the components of the topology is performed at run time. We have evaluated the processing performance and load balancing of the algorithm. The evaluation results indicate that the proposed algorithm outperforms existing algorithms in both task processing and load balancing.
ISBN: (Print) 9781538618868
Biometrics are automated methods of identifying a person, or verifying a person's identity, based on physiological or behavioral characteristics such as fingerprints, voice, face, and iris. Because biometric traits are extremely difficult to forge and cannot be forgotten or stolen, biometric authentication offers a convenient, accurate, and highly secure alternative for an individual, giving it advantages over traditional cryptography-based authentication schemes. In recent years, biometrics has become a widely used method of recognizing a person and is gaining prominence in defense, banking, retail, consumer products, examinations, and other fields. In this context, a sincere effort is made to develop a novel method of identifying a person: a unique biometric identification system with advantages over existing methodologies. Fast processing algorithms are developed with computation speed in mind (3-4 seconds). A large database is considered as the starting point, followed by pre-processing, segmentation, normalization, feature extraction, and matching to produce the final matched results. The features of the input eye image are compared with those already stored in the database; if they match, the corresponding eye image is identified, otherwise it remains unidentified. Since a bitwise comparison is necessary, we have chosen the Hamming distance for identification. The advantages of the developed methodology are its speed of computation and the simplicity of the biometric recognition system, which make it user friendly enough for a layman to operate. In this paper, we present the methodology, i.e., the
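The bitwise matching step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 16-bit codes and the 0.32 acceptance threshold are assumptions for demonstration; real iris codes are much longer and thresholds are tuned on data.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes."""
    if code_a.shape != code_b.shape:
        raise ValueError("iris codes must have the same length")
    return np.count_nonzero(code_a != code_b) / code_a.size

def is_match(code_a: np.ndarray, code_b: np.ndarray, thresh: float = 0.32) -> bool:
    """Identify when the normalized Hamming distance is below a threshold."""
    return hamming_distance(code_a, code_b) < thresh

# Two toy 16-bit codes that differ in 4 of 16 positions -> distance 0.25
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
b = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
print(hamming_distance(a, b))  # 0.25
```

Because the distance is a simple count of differing bits, it can be computed in a few machine instructions per word, which is consistent with the abstract's emphasis on fast matching.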
ISBN: (Print) 9781538626795
This study investigates the impulsive stabilization problem of positive systems with time-varying delays. A new time-varying weighted copositive Lyapunov function is constructed. Sufficient stabilization conditions on the upper and lower bounds of impulsive intervals are established using the convex combination technique. Under the proposed conditions, the positivity and exponential stability of the corresponding closed-loop system can be guaranteed. Based on the linear programming (LP) technique, a systematic design procedure is presented for the impulsive controller. Finally, a numerical example is provided to demonstrate the effectiveness of the theoretical result.
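As a hedged sketch of the construction the abstract names (the paper's exact functional may differ), a time-varying weighted copositive Lyapunov function for a positive system with delay $\tau(t)$ is typically linear in the state:

```latex
V(t, x_t) = \nu(t)^{\top} x(t) + \int_{t-\tau(t)}^{t} \mu^{\top} x(s)\, ds,
\qquad \nu(t) \succ 0,\quad \mu \succ 0,
```

where positivity of $x(t)$ guarantees $V \ge 0$, and exponential stability follows when $V$ decays between impulses and does not grow too much at impulse instants. Because $V$ is linear in $x$, the resulting stabilization conditions can be checked by linear programming rather than semidefinite programming, which matches the LP-based design procedure described above.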
ISBN: (Print) 9781538607336
With the integration of face recognition technology into important identity applications, it is imperative that the effects of facial aging on face recognition performance are thoroughly understood. As face recognition systems evolve and improve, they should be periodically re-evaluated on large-scale longitudinal face datasets. In our study, we evaluate the performance of two state-of-the-art commercial off-the-shelf (COTS) face recognition systems on two large-scale longitudinal datasets of mugshots of repeat offenders. The larger of the two datasets has 147,784 images of 18,007 subjects, with an average of 8 images per subject over an average time span of 8.5 years. We fit multi-level statistical models to genuine comparison scores (similarity between images of the same face) from the two COTS face matchers. This allows us to analyze the degradation in recognition performance due to elapsed time between a probe (query) image and its enrollment (gallery) image. We account for face image quality to obtain a better estimate of trends due to aging, and analyze whether longitudinal trends in genuine scores differ by subject gender and race. Based on the results of our statistical model, we infer that the state-of-the-art COTS matchers can verify 99% of the subjects at a false accept rate (FAR) of 0.01% for up to 10.5 and 8.5 years of elapsed time, respectively. Beyond this time lapse, there is a significant loss in face recognition accuracy. This study extends and confirms the findings of earlier longitudinal studies on face recognition.
ISBN: (Print) 9781509063451
We present efficient Schur parametrization algorithms for a subclass of near-stationary second-order stochastic processes which we call p-stationary processes. This approach allows for uniform complexity reduction of the general linear Schur algorithm and results in a hierarchical class of algorithms that is suitable for efficient implementation and a good starting point for nonlinear generalizations.
ISBN: (Print) 9781538620083
Auditing of certificate and bill images is pervasive in ERP systems. However, the scanned or camera-captured images sent to an ERP system are not always of good quality. To automate the auditing of certificates and bills, and to alleviate the low recognition rate caused by low-quality images in automatic certificate and bill analysis and processing systems, this paper proposes a method for detecting and filtering out low-quality images, leaving only high-quality ones, to improve the recognition rate of the audit. Unlike other image quality assessment algorithms, which deal only with blur or noise, the proposed method comprehensively and practically considers a variety of key factors that affect image quality in certificate and bill assessment (clarity, color bias, noise, abnormal brightness areas, etc.). The method has been applied to detect image quality in an automatic certificate and bill verification system, and has achieved good unbiasedness and high sensitivity in real-world ERP applications.
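One of the quality factors listed above, clarity, is commonly scored with the variance of the Laplacian; the sketch below gates images on that single factor. This is an assumed stand-in, not the paper's multi-factor method, and the threshold of 100.0 is a hypothetical value that would need tuning per scanner or camera.

```python
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian response; low values indicate a blurry image."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    lap = np.zeros((h - 2, w - 2))
    # Apply the 3x3 Laplacian kernel over the interior of the image
    for i in range(3):
        for j in range(3):
            lap += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(lap.var())

def is_acceptable(gray: np.ndarray, blur_thresh: float = 100.0) -> bool:
    """Keep only images whose sharpness score clears the threshold."""
    return blur_score(gray) >= blur_thresh

flat = np.full((32, 32), 128.0)                                  # featureless -> blurry
checker = ((np.indices((32, 32)).sum(axis=0) % 2) * 255).astype(float)  # sharp edges
```

In a full pipeline like the one described, analogous scalar scores for color bias, noise, and abnormal brightness would be combined before the accept/reject decision.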
The article deals with the segmentation of digital images, one of the main tasks in digital image processing (IP) and computer vision. To solve this problem, an algorithm based on the theory of fuzzy sets was proposed. The main idea of the proposed algorithm is the formation of subsets of interconnected pixels using the fuzzy c-means method. A distinctive feature of the proposed algorithm is the definition of a set of features that delimit areas with similar characteristics in the feature space of the analyzed image. The proposed segmentation algorithm (SA) consists of two stages: 1) formation of characteristic features for all channels of the base color; 2) clustering of the image elements. The practical significance of the obtained results lies in the fact that the developed algorithms can be used in various applied problems where objects represented as images must be classified. To test the efficiency of the developed algorithm, experimental studies were carried out on a number of applied problems related to color image segmentation, in particular license plate recognition.
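The clustering stage can be sketched with a minimal fuzzy c-means implementation. This is a textbook version under assumed defaults (fuzzifier m = 2, deterministic center initialization on data points), not the article's exact algorithm, and the two-region "image" is a toy example.

```python
import numpy as np

def fuzzy_c_means(X: np.ndarray, c: int = 2, m: float = 2.0, iters: int = 50):
    """Soft-assign each feature vector in X to c clusters (fuzzy c-means)."""
    n = X.shape[0]
    # Deterministic initialization: centers on evenly spaced data points
    centers = X[np.linspace(0, n - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        # Distances to each center (small epsilon avoids division by zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))          # membership update
        U /= U.sum(axis=1, keepdims=True)    # rows sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted-mean centers
    return U, centers

# Toy "image": 40 pixels drawn from two flat colour regions
X = np.vstack([np.full((20, 3), 10.0), np.full((20, 3), 200.0)])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)  # hard segmentation from the soft memberships
```

Taking the arg-max over memberships yields the hard region map; in the article's setting the feature vectors would combine all base-color channels rather than raw RGB alone.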
ISBN: (Print) 9789526865300
Uterine cervical cancer is the second most common cancer in women worldwide. The accuracy of colposcopy is highly dependent on the physician's individual skill. In expert hands, colposcopy has been reported to have high sensitivity (96%) and low specificity (48%) when differentiating abnormal tissues. This has led to significant interest in new diagnostic systems and new automatic methods for colposcopic image analysis. The presented paper develops a method based on the analysis of fluorescence images obtained at different excitation wavelengths. The image sets were obtained in a clinic with the multispectral colposcope LuxCol. The images for one patient include: images obtained under white-light illumination and under polarized white light, and fluorescence images obtained by excitation at wavelengths of 360 nm, 390 nm, 430 nm, and 390 nm with a 635 nm laser. Our approach involves image acquisition, image processing, feature extraction, selection of the most informative features and the most informative image types, classification, and pathology map creation. The result of the proposed method is the pathology map: an image of the cervix partitioned into areas with a definite diagnosis such as norm, CNI (chronic nonspecific inflammation), or CIN (cervical intraepithelial neoplasia). On the CNI/CIN border, the obtained sensitivity is 0.85 and the specificity is 0.78. The proposed algorithms make it possible to obtain a correct differential pathology map with probability 0.8. The obtained results and the characteristics of the classification task show the practical applicability of pathology maps based on fluorescence images.
In this paper, we address the problem of parametric space dimension reduction in the task of interpolating multidimensional signals. We develop adaptive parameterized interpolation algorithms for multidimensional signals, and reduce the dimension of the parameter space to lower the cost of optimizing such algorithms. To achieve this reduction, the dependences of samples within signal sections and between signal sections are handled differently: dependencies between sections are captured by an approximation algorithm over the sections, while sample dependencies within a section are captured by an adaptive parameterized interpolation algorithm. As a result, we solve the optimization problem of the adaptive interpolator in a lower-dimensional parameter space for each signal section separately. To study the effectiveness of the adaptive interpolators, we perform computational experiments on real-world multidimensional signals. Experimental results show that the proposed interpolator improves the efficiency of the compression method by up to 10% compared with the prototype algorithm.
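The idea of optimizing an adaptive interpolator over a small parameter space, separately per section, can be illustrated with a 1-D toy. Everything here is an assumed simplification of the paper's multidimensional scheme: one scalar parameter (an edge threshold that switches between linear and nearest-neighbour prediction) is chosen per section by exhaustive search.

```python
import numpy as np

def interpolate(sig_even: np.ndarray, grad_thresh: float) -> np.ndarray:
    """Fill the missing odd samples between the even ones.

    Smooth neighbourhoods get the linear average; across sharp jumps
    (difference above grad_thresh) the left neighbour is copied instead.
    """
    out = []
    for a, b in zip(sig_even[:-1], sig_even[1:]):
        out.append(a)
        out.append((a + b) / 2.0 if abs(b - a) <= grad_thresh else a)
    out.append(sig_even[-1])
    return np.array(out)

def best_threshold(section: np.ndarray, candidates: list) -> float:
    """Per-section search over the 1-D parameter space for the best threshold."""
    even = section[::2]
    errs = [np.abs(interpolate(even, t) - section[:len(even) * 2 - 1]).sum()
            for t in candidates]
    return candidates[int(np.argmin(errs))]

# A section that is smooth, then flat across a step edge
section = np.array([0., 1, 2, 3, 4, 4, 50, 50, 50])
best = best_threshold(section, [2.0, 100.0])
```

Here the small threshold wins because the edge-aware predictor reconstructs the step exactly; a section without edges would instead favour pure linear interpolation, which is the sense in which the parameter adapts per section.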
ISBN: (Digital) 9781510609921
ISBN: (Print) 9781510609914; 9781510609921
A back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between a large number of inputs and outputs without complex mathematical equations describing the mapping relationship, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network from training samples and back propagation, minimizing the network's sum of squared errors. Since the boundary in computed tomography (CT) heart images is usually discontinuous, and the volume and boundary of the heart vary greatly between images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, and conventional methods cannot accurately distinguish the two cases. In this paper, we introduce a BP network to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain samples, and the BP network was trained on these samples. To obtain an appropriate BP network for heart image segmentation, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network, the differences between the theoretical output and the actual output were compared, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through extensive training, the BP network becomes stable and the weight coefficients of the layers can be determined, which captures the relationship between the CT images and the heart boundary.
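The train-forward, compare-outputs, feed-errors-back loop described above can be sketched with a tiny sigmoid network minimizing the sum of squared errors. This is a generic BP sketch, not the paper's network: the layer sizes, learning rate, and the XOR toy target are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=4, lr=0.5, epochs=2000, seed=0):
    """Train a 1-hidden-layer sigmoid network by gradient back propagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1));          b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)            # forward pass: output layer
        err = out - y                         # actual minus theoretical output
        losses.append(float((err ** 2).sum()))  # sum of squared errors
        d_out = err * out * (1 - out)         # backward pass: output deltas
        d_h = (d_out @ W2.T) * h * (1 - h)    # backward pass: hidden deltas
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2, losses

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR as a toy target
*_, losses = train_bp(X, y)
```

In the segmentation setting, X would hold per-pixel gray-level features extracted from the normalized CT images and y the manually labeled boundary, but the weight-update loop is the same.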