Handwriting recognition is one of the most popular areas of research and makes a major contribution to the trending technology of mobile computing. This paper provides technical details for the implementation of a handwriting recognition system. Techniques suitable for offline and online handwriting recognition systems are also discussed. Several pre-processing and classification algorithms, such as normalization, re-sampling, Principal Component Analysis (PCA) and Dynamic Time Warping (DTW), are presented in this paper.
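The DTW step mentioned above can be sketched in a few lines. This is the generic textbook dynamic-programming formulation for 1-D feature sequences, not the paper's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    # cost[i][j] = minimal cumulative distance aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two pen-stroke profiles that differ only in timing align at zero cost.
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # -> 0.0
```

This time-warping invariance is what makes DTW useful for comparing pen trajectories written at different speeds.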
ISBN:
(print) 9781509047611; 9781509047604
The human body exhibits many vital signs, such as heart rate (HR) and respiratory rate (RR), that are used to assess fitness and health. Vital signs are typically measured by a trained health professional and may be difficult for individuals to measure accurately at home. Clinic visits are therefore needed, with the associated burdens of cost and time spent waiting in long queues. The widespread use of smartphones with video capability presents an opportunity to create non-invasive applications for the assessment of vital signs. Over the past decade, several researchers have worked on assessing vital signs from video, including HR, RR and other parameters such as anemia and blood oxygen saturation (SpO2). This paper reviews the different image and video processing algorithms developed for non-contact vital signs assessment and outlines the key remaining challenges in the field, which can serve as potential research topics. Among the reviewed methods, the CHROM algorithm produces the highest accuracy in detecting remote photoplethysmography (rPPG) signals. Handling large databases and motion stabilization remain challenges that no existing algorithm addresses, and these are the main areas of research in rPPG.
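As background to the review, a simplified CHROM-style projection can be sketched as follows. The chrominance coefficients follow the standard formulation, but the synthetic input, frame rate and plausibility band are illustrative assumptions, not any of the reviewed systems:

```python
import numpy as np

def chrom_pulse(rgb):
    """Simplified CHROM projection: rgb is (T, 3) mean skin-pixel values per frame."""
    norm = rgb / rgb.mean(axis=0)          # remove the DC skin tone
    x = 3 * norm[:, 0] - 2 * norm[:, 1]
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]
    alpha = x.std() / y.std()
    return x - alpha * y                   # motion-robust pulse signal

def heart_rate_bpm(signal, fps):
    """Dominant frequency of the pulse signal, in beats per minute."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 bpm plausibility band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 72-bpm pulse riding on all three channels at 30 fps.
t = np.arange(300) / 30.0
rgb = np.ones((300, 3)) + 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None] * [0.3, 0.8, 0.4]
print(round(heart_rate_bpm(chrom_pulse(rgb), fps=30)))  # -> 72
```

Real systems add face detection, skin segmentation and the motion stabilization that the review identifies as an open problem.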
ISBN:
(digital) 9783662498316
ISBN:
(print) 9783662498316; 9783662498293
In recent years, image encryption has attracted much attention. In particular, due to the large data capacity of images and the high correlation among pixels, chaos-based image encryption algorithms are especially suitable for digital image encryption. In this paper, we propose a novel chaotic image encryption algorithm in which multiple chaotic systems and an efficient self-adaptive model are combined to enhance security. Unlike conventional algorithms, the plaintext participates in the generation of the ciphertext in a new way, following the idea of the perceptron model. The proposed algorithm enlarges the key space, enhances the randomness of the algorithm, and resists differential attacks effectively. Simulation results demonstrate that the proposed algorithm offers high security against the main current attacks, making it an excellent candidate for practical image encryption applications.
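The paper's perceptron-inspired multi-chaos scheme is not reproduced here, but the general flavour of a chaos-based cipher with plaintext feedback can be illustrated with a minimal logistic-map XOR sketch. The map parameters and feedback rule below are illustrative assumptions:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x <- r * x * (1 - x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(img, x0=0.3141, r=3.9999):
    """XOR cipher with ciphertext feedback: each plaintext byte diffuses forward."""
    flat = img.ravel()
    ks = logistic_keystream(x0, r, flat.size)
    out = np.empty_like(flat)
    prev = np.uint8(0)
    for i in range(flat.size):
        out[i] = flat[i] ^ ks[i] ^ prev   # feedback couples cipher to plaintext
        prev = out[i]
    return out.reshape(img.shape)

def decrypt(img, x0=0.3141, r=3.9999):
    flat = img.ravel()
    ks = logistic_keystream(x0, r, flat.size)
    out = np.empty_like(flat)
    prev = np.uint8(0)
    for i in range(flat.size):
        out[i] = flat[i] ^ ks[i] ^ prev
        prev = flat[i]                    # feedback uses the ciphertext byte
    return out.reshape(img.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(np.array_equal(decrypt(encrypt(img)), img))  # -> True
```

The feedback term is what makes a one-pixel change in the plaintext propagate through the rest of the ciphertext, the property the paper relies on to resist differential attacks.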
ISBN:
(print) 9781510601086
This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm has two novel attributes: (1) it divides the geometric estimation procedure into subroutines with atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from the detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.
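The idea of atomic incremental computational cost can be illustrated with a generic incremental plane fit (a textbook moment-accumulation sketch, not the article's algorithm): each new depth point updates running sums in O(1), and the plane parameters are read off only when needed:

```python
import numpy as np

class IncrementalPlaneFit:
    """Accumulates point moments so each new RGBD point costs O(1) to absorb."""
    def __init__(self):
        self.n = 0
        self.s = np.zeros(3)        # running sum of points
        self.ss = np.zeros((3, 3))  # running sum of outer products

    def add(self, p):
        p = np.asarray(p, dtype=float)
        self.n += 1
        self.s += p
        self.ss += np.outer(p, p)

    def normal(self):
        """Plane normal = eigenvector of the scatter matrix with smallest eigenvalue."""
        mean = self.s / self.n
        scatter = self.ss / self.n - np.outer(mean, mean)
        w, v = np.linalg.eigh(scatter)  # eigenvalues in ascending order
        return v[:, 0]

fit = IncrementalPlaneFit()
for x in range(5):
    for y in range(5):
        fit.add([x, y, 2.0])        # points on the plane z = 2
print(np.round(np.abs(fit.normal()), 3))  # -> [0. 0. 1.]
```

Because the accumulators are additive, the same structure supports merging planar patches into larger box faces without revisiting raw points.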
ISBN:
(print) 9781509061839
Hyperspectral remote sensing has become an active research field over the last decades thanks to the availability of efficient machine learning algorithms and ever-increasing computational power. However, there are application domains (e.g., embedded applications) in which deploying such systems becomes unfeasible due to high requirements on size, power consumption or processing speed. A way to overcome this problem is to use a method able to scale down the dimensionality of the problem and/or reduce the complexity of the machine learning models. In this paper, we propose the use of a multiobjective genetic algorithm to minimize both the dimension of the input space and the size of the machine learning model. In particular, we have developed a hyperspectral image classifier based on an Extreme Learning Machine (ELM) for which the number of system inputs (dimensionality) and the number of hidden neurons are minimized without decreasing its performance. The system is evaluated on a well-known benchmark dataset.
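An ELM with a tunable hidden-layer size, the quantity the genetic algorithm would shrink alongside the input dimensionality, can be sketched as follows. This is the generic textbook formulation; the toy dataset and sizes are illustrative, not the paper's benchmark:

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Extreme Learning Machine: random hidden layer, least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                     # 4 "bands" stand in for spectra
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy 2-class problem
W, b, beta = train_elm(X, y, n_hidden=30, rng=rng)
acc = np.mean((predict_elm(X, W, b, beta) > 0.5) == (y > 0.5))
print(round(acc, 2))
```

Because training is a single least-squares solve, re-evaluating a candidate with fewer inputs or hidden neurons is cheap, which is what makes ELMs attractive inside a genetic search loop.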
ISBN:
(print) 9781509025527
Near-duplicate image detection requires matching slightly altered images to the original image, which helps in the detection of forged images. A great deal of effort has been dedicated to visual applications that need efficient image similarity metrics and signatures. Digital images can be easily edited and manipulated owing to the great functionality of image processing software. This leads to the challenge of matching somewhat altered images to their originals, which is termed near-duplicate image detection. This paper discusses the literature on the development of several image matching algorithms. The paper comprises two sections: Section 1 is the introduction, and Section 2 discusses the literature reviewed on the development of image matching algorithms.
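As an illustration of the signature idea, here is a toy average-hash detector (a generic technique, not one of the algorithms reviewed in the paper): near-duplicates land close in Hamming distance while unrelated images do not:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Toy aHash: shrink by block-averaging, then threshold at the mean."""
    h, w = img.shape
    blocks = img[:h - h % hash_size, :w - w % hash_size]
    blocks = blocks.reshape(hash_size, h // hash_size, hash_size, w // hash_size)
    small = blocks.mean(axis=(1, 3))              # hash_size x hash_size thumbnail
    return (small >= small.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
tweaked = img + rng.normal(scale=2.0, size=img.shape)   # mild edit / recompression
other = rng.integers(0, 256, size=(64, 64)).astype(float)
print(hamming(average_hash(img), average_hash(tweaked)))  # small
print(hamming(average_hash(img), average_hash(other)))    # near 32 for unrelated
```

A 64-bit signature like this is what makes large-scale duplicate search tractable: comparison is a cheap XOR-and-popcount rather than a full image comparison.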
Image compression is a widely adopted technique for effective image storage and transmission over open communication channels in cyber-physical systems. Standard cryptographic algorithms are usually used to reach this goal. Therefore, to organize effective and secure storage of images, two independent and sequential procedures are required: compression and encryption. In the scenario of interest, the compression and encryption transformations must be undone in reverse order to restore the original image, i.e. a so-called "code book" is needed, much as encryption and decryption need a secret key. An effective way of combining these procedures for digital images is proposed in this manuscript. This research focuses mainly on compression methods that consider the significance of different parts of the initial multimedia object (for example, an image) in order to increase the quality of the resulting (decompressed) image. One of the most effective approaches for this task is to utilize error-correcting codes (ECC), which make it possible to limit the number of resulting errors (distortions) as well as to ensure the value of the resulting compression ratio. Applying such codes makes it possible to distribute the errors added during processing according to the predefined significance of the elements of the initial multimedia object. As an example, an approach based on the weighted Hamming metric is presented, which guarantees a bound on the maximum number of errors (distortions) while taking into account the predefined significance of image zones. A way to use a subclass of Goppa codes that are perfect in the weighted Hamming metric, with Goppa polynomials used as a secret key, is presented as well. An additional effect of such encrypted compression methods is auto-watermarking of the resulting image.
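The weighted Hamming metric itself is easy to illustrate: each position carries a weight reflecting the significance of its image zone, and the distance sums the weights of the positions that differ. The weights below are hypothetical, chosen only to show the effect:

```python
def weighted_hamming(x, y, w):
    """Weighted Hamming distance: errors in significant zones cost more."""
    return sum(wi for xi, yi, wi in zip(x, y, w) if xi != yi)

# Hypothetical weights: the first half of the block is a high-significance zone.
w = [4] * 4 + [1] * 4
sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 1, 1, 0, 1, 1, 1]   # two errors, both in the low-priority zone
print(weighted_hamming(sent, received, w))  # -> 2
```

A code that is perfect in this metric corrects every error pattern up to a weighted budget, so distortion is pushed away from the significant image zones by construction.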
ISBN:
(print) 9781538637913
The application of the Support Vector Machine (SVM) over data streams is growing with the increasing real-time processing requirements in classification, such as anomaly detection and real-time image processing. However, the dynamic live data with high volume and fast arrival rates in data streams make it challenging to apply SVM in stream processing. Existing SVM implementations are mostly designed for batch processing and, because of their inherent complexity, hardly satisfy the efficiency requirements of stream processing. To address these challenges, we propose a high-efficiency distributed SVM framework over data streams (HDSVM), which consists of two main algorithms: an incremental learning algorithm and a distributed algorithm. First, we propose a partial support vector reserving incremental learning algorithm (PSVIL). By selecting a subset of support vectors based on their distances to the classification hyperplane, instead of the universal set, to update the SVM, the algorithm achieves lower time overhead while ensuring accuracy. Second, we propose a distribution-remaining partition and fast aggregation distributed algorithm (DRPFA) for SVM. The real-time data is partitioned based on the original distribution with clustering instead of random partitioning, and historical support vectors are partitioned based on their distances to the classification hyperplane. With this partition strategy, the global hyperplane can be obtained by averaging the parameters of the local hyperplanes. Extensive experiments on Apache Storm show that the proposed HDSVM achieves lower time overhead and similar accuracy compared with the state of the art: the speed-up ratio is increased by 2-8x within a 1% accuracy deviation.
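The selection step of PSVIL, keeping only the samples nearest the current hyperplane, can be sketched as follows for a linear SVM. The model, data and retention ratio here are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def svm_margins(w, b, X):
    """Signed distance of each sample to the hyperplane w.x + b = 0."""
    return (X @ w + b) / np.linalg.norm(w)

def select_retained(X, y, w, b, keep=0.5):
    """PSVIL-style selection (sketch): keep only the samples closest to the
    current hyperplane; they are the ones most likely to remain support vectors."""
    dist = np.abs(svm_margins(w, b, X))
    order = np.argsort(dist)
    k = max(1, int(keep * len(X)))
    return X[order[:k]], y[order[:k]]

# Hypothetical current model and a mini-batch from the stream.
w, b = np.array([1.0, -1.0]), 0.0
X = np.array([[0.1, 0.0], [3.0, -3.0], [0.0, 0.2], [-4.0, 4.0]])
y = np.array([1, 1, -1, -1])
Xr, yr = select_retained(X, y, w, b, keep=0.5)
print(Xr)  # the two points nearest the boundary survive the update
```

Retraining on this reduced set, rather than on all accumulated data, is what keeps the per-batch update cost bounded as the stream grows.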
Models based on local operators cannot preserve texture information. Nonlocal models can be used for many image processing tasks. A main advantage of nonlocal models over classical PDE-based algorithms is the abili...
ISBN:
(print) 9781509042975
Many image processing algorithms have been parallelized successfully on many-core processors such as the GPU and Intel Xeon Phi. In this paper, we target the Sunway many-core processor SW26010, a new processor designed and made in China that powers the current No. 1 supercomputer, Sunway TaihuLight. The paper first introduces the architecture of the Sunway SW26010 processor and two representative image processing algorithms: local binary patterns (LBP) and the histogram of oriented gradients (HOG). We then propose a parallel implementation; experimental results show that the speedup reaches up to 170 for LBP and 33 for HOG. Two optimized methods are then built on this parallel implementation, covering both the program and the parallel design. We optimize the program by combining step transmission with software prefetching; with this first optimization, the maximum speedup reaches 310 for LBP on high-resolution images and 83 for HOG. We then optimize the parallel design using a coarse-grained parallel method, and the experimental results show speedups of up to 370 for LBP and 95 for HOG when processing low-resolution images. Finally, we investigate the scalability of our parallelization on the Sunway TaihuLight with different numbers of processor nodes, and the results show that the parallel designs and implementations of both algorithms scale well.
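A basic 3x3 LBP can be written with NumPy slices, which makes the per-pixel data parallelism that maps well onto many-core processors easy to see. This is the textbook formulation, not the paper's SW26010 code:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code from its neighbours."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.uint8)
print(lbp_codes(img))  # -> [[255]]  all eight neighbours exceed the centre
```

Every output pixel depends only on its 3x3 neighbourhood, so the image tiles cleanly across compute elements, which is exactly the structure the paper's parallelization exploits.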