ISBN (print): 9781509025527
Near-duplicate image detection requires matching slightly altered images to their originals, which helps in detecting forged images. A great deal of effort has been dedicated to visual applications that need efficient image similarity metrics and signatures. Digital images can be easily edited and manipulated owing to the rich functionality of image processing software. This leads to the challenge of matching somewhat altered images to their originals, termed near-duplicate image detection. This paper reviews the literature on the development of several image matching algorithms. The paper comprises two sections: Section 1 is the introduction, and Section 2 discusses the literature reviewed on the development of image matching algorithms.
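As context for the matching problem this survey addresses, the sketch below shows one common near-duplicate signature, a difference hash (dHash), compared by Hamming distance. It is a minimal illustration only; the Pillow-based implementation, the 8x8 hash size, and the distance threshold are assumptions, not parameters of any algorithm reviewed in the paper.

```python
# Minimal dHash sketch for near-duplicate image detection (illustrative only).
# Assumes Pillow is available; hash size and threshold are arbitrary choices.
from PIL import Image

def dhash(path, size=8):
    """Return a (size*size)-bit difference hash of the image as an int."""
    # Shrink to (size+1) x size grayscale so adjacent-pixel comparisons are cheap.
    img = Image.open(path).convert("L").resize((size + 1, size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def is_near_duplicate(path_a, path_b, max_distance=10):
    """Treat images as near duplicates if their hashes differ in few bits."""
    distance = bin(dhash(path_a) ^ dhash(path_b)).count("1")
    return distance <= max_distance
```

Small edits (recompression, mild resizing, slight brightness changes) typically flip only a few bits of such a hash, which is why signatures of this kind are a common baseline for near-duplicate matching.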
Image compression is a widely adopted technique for effective image storage and transmission over open communication channels in cyber-physical systems. Standard cryptographic algorithms are usually used to secure such data, so organizing effective and secure storage of images normally requires two independent, sequential procedures: compression and encryption. To restore the original image, the compression and encryption transformations must be undone in reverse order, i.e. a so-called "code book" is needed, just as encryption and decryption need a secret key. This manuscript proposes an effective way of combining these procedures for digital images. The research focuses mainly on compression methods that take into account the differing significance of parts of the initial multimedia object (for example, an image) in order to increase the quality of the resulting (decompressed) image. One of the most effective approaches for this task is to employ error-correcting codes (ECC), which make it possible both to limit the number of resulting errors (distortions) and to guarantee the resulting compression ratio. Applying such codes allows the errors introduced during processing to be distributed according to the predefined significance of the elements of the initial multimedia object. As an example, an approach based on the weighted Hamming metric is presented, which guarantees a bound on the maximum number of errors (distortions) while taking into account the predefined significance of image zones. A way of using a subclass of Goppa codes that are perfect in the weighted Hamming metric, with the Goppa polynomials serving as a secret key, is presented as well. An additional effect of such encrypted compression methods is auto-watermarking of the resulting image.
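To make the metric concrete, the following toy sketch computes a weighted Hamming distance between two bit vectors, where each position carries a weight reflecting the significance of the image zone it encodes. The weights, codewords, and zone layout are made-up values, and the sketch does not implement the Goppa-code construction itself.

```python
# Toy weighted Hamming distance: bit errors in "significant" positions cost more.
# Weights, codewords and the zone layout below are illustrative assumptions.
def weighted_hamming(a, b, weights):
    """Sum the weights of the positions where bit vectors a and b differ."""
    assert len(a) == len(b) == len(weights)
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

# Example: the first four bits encode a high-significance image zone (weight 3),
# the last four a low-significance zone (weight 1).
weights   = [3, 3, 3, 3, 1, 1, 1, 1]
original  = [1, 0, 1, 1, 0, 0, 1, 0]
distorted = [1, 0, 0, 1, 0, 1, 1, 0]   # one error in each zone

print(weighted_hamming(original, distorted, weights))  # -> 4 (3 + 1)
```

Under such a metric, a code that bounds the weighted distance between original and decompressed data implicitly pushes distortions toward the low-weight (less significant) zones of the image.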
Models based on local operators cannot preserve texture information. Nonlocal models can be used for many image processing tasks. A main advantage of nonlocal models over classical PDE-based algorithms is the abili...
ISBN (print): 9781538637913
The application of the Support Vector Machine (SVM) to data streams is growing with the increasing real-time processing requirements in classification tasks such as anomaly detection and real-time image processing. However, the dynamic live data with high volume and fast arrival rate in data streams makes it challenging to apply SVM in stream processing. Existing SVM implementations are mostly designed for batch processing and, owing to their inherent complexity, hardly satisfy the efficiency requirements of stream processing. To address these challenges, we propose a high-efficiency distributed SVM framework over data streams (HDSVM), which consists of two main algorithms: an incremental learning algorithm and a distributed algorithm. Firstly, we propose a partial support vector reserving incremental learning algorithm (PSVIL). By selecting a subset of support vectors based on their distances to the classification hyperplane, instead of the universal set, to update the SVM, the algorithm achieves lower time overhead while maintaining accuracy. Secondly, we propose a distribution-remaining partition and fast aggregation distributed algorithm (DRPFA) for SVM. Real-time data is partitioned based on the original distribution using clustering instead of random partitioning, and historical support vectors are partitioned based on their distances to the classification hyperplane. Thanks to this partition strategy, the global hyperplane can be obtained by averaging the parameters of the local hyperplanes. Extensive experiments on Apache Storm show that the proposed HDSVM achieves lower time overhead and similar accuracy compared with the state of the art: the speed-up ratio is increased by 2-8 times within 1% accuracy deviation.
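The core idea of reserving only the support vectors nearest the hyperplane can be sketched on a single machine as below. This is not the paper's HDSVM code: it assumes a binary classification problem, uses scikit-learn's SVC with a linear kernel, and the kept fraction is an arbitrary choice; the distributed partitioning and aggregation on Storm are omitted.

```python
# Single-machine sketch of "reserve support vectors near the hyperplane, then
# retrain on them plus the new batch". Kernel and keep_fraction are assumptions.
import numpy as np
from sklearn.svm import SVC

def incremental_update(model, X_old, y_old, X_new, y_new, keep_fraction=0.5):
    """Retrain on the old support vectors closest to the hyperplane plus a new batch."""
    sv_idx = model.support_                      # indices of support vectors in X_old
    X_sv, y_sv = X_old[sv_idx], y_old[sv_idx]
    # Keep only the support vectors nearest the decision boundary.
    margin = np.abs(model.decision_function(X_sv))
    keep = np.argsort(margin)[: max(1, int(keep_fraction * len(X_sv)))]
    X_train = np.vstack([X_sv[keep], X_new])
    y_train = np.concatenate([y_sv[keep], y_new])
    return SVC(kernel="linear").fit(X_train, y_train)
```

Because only a small, boundary-near subset of historical points is carried forward, each retraining step works on far fewer samples than the full accumulated stream, which is what drives the lower time overhead reported in the abstract.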
ISBN (print): 9781509042975
Many image processing algorithms have been parallelized successfully on many-core processors such as the GPU and the Intel Xeon Phi. In this paper, we target the Sunway many-core processor SW26010, a new processor designed and made in China that powers the current No. 1 supercomputer, Sunway TaihuLight. The paper first introduces the architecture of the Sunway SW26010 processor and two representative image processing algorithms: local binary pattern (LBP) and histogram of oriented gradients (HOG). We then propose a parallel implementation, and experimental results show that its speedup reaches up to 170 for LBP and 33 for HOG. Two optimizations are then built on this parallel implementation, covering both the program and the parallel design. We optimize the program by combining step transmission with software prefetching; with this first optimization, the maximum speedup reaches 310 for LBP on high-resolution images and 83 for HOG. We then optimize the parallel design with a coarse-grained parallel method, and the experimental results show that the speedup reaches up to 370 for LBP and 95 for HOG when processing low-resolution images. Finally, we investigate the scalability of our parallelization on the Sunway TaihuLight with different numbers of processor nodes, and the experimental results show that the parallel designs and implementations of the two algorithms scale well.
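For reference, the LBP operator named above can be written sequentially in a few lines of NumPy. This is the textbook 3x3 LBP, shown only to illustrate the per-pixel work being parallelized; it is not the SW26010 implementation evaluated in the paper.

```python
# Plain 3x3 local binary pattern (LBP) over a grayscale image (sequential,
# not the SW26010-parallelized version discussed in the paper).
import numpy as np

def lbp_3x3(gray):
    """Return the 8-bit LBP code of each interior pixel of a 2-D uint8 array."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Neighbour offsets in clockwise order starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    return codes
```

Each output pixel depends only on its 3x3 neighbourhood, so the image can be tiled across compute elements with a one-pixel halo, which is why LBP maps well onto many-core processors.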
The recent advances in light field imaging are changing the way in which visual content is captured, processed, and consumed. Storage and delivery systems for light field images rely on efficient compression algorithms, and such algorithms must additionally take into account the feature-rich rendering of light field content. A proper evaluation of visual quality is therefore essential to design and improve coding solutions for light field content, and the design of subjective tests should also reflect the light field rendering process. This paper presents and compares two methodologies for assessing the quality of experience in light field imaging. The first methodology uses an interactive approach, allowing subjects to engage with the light field content while assessing it. The second, on the other hand, is completely passive, ensuring that all subjects have the same experience. Advantages and drawbacks of each approach are compared through statistical analysis of the results, and conclusions are drawn. The obtained results provide useful insights for the future design of evaluation techniques for light field content.
ISBN (print): 9781510600874
When capturing image data over long distances (0.5 km and above), images are often degraded by atmospheric turbulence, especially when imaging paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios, while others may only be suitable for post-processing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005 as part of our ATCOM [1] image processing suite. In this paper we compare techniques from the literature with our commercially available real-time GPU-accelerated turbulence mitigation software suite, as well as with in-house research algorithms. These comparisons are made using real, experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics include image quality, video latency, computational complexity, and potential for real-time operation.
The application of visual technology to mine robots has become a hot topic in the development of automated coal mine production. Key techniques of robot control are feature recognition in sampled videos and the perception of complex surroundings. However, features in underground images with a dark hue and low target discrimination are difficult to recognize and extract, especially because of the nonuniform illumination and heavy dust concentration in mines. Hence, an edge detection algorithm based on the Retinex theory and the wavelet multiscale product is proposed in this paper for low-light-level mine image feature extraction. It employs a modified multiscale Retinex method on the low-frequency sub-band after wavelet decomposition, an improved fuzzy enhancement approach on the high-frequency components, and finally a revised multiscale-product edge detection algorithm to obtain the final edge image. Compared with a variety of algorithms on edges of both normally illuminated and underground images, experimental results show that, with its high real-time performance and detection accuracy, the proposed algorithm meets the needs of surrounding-environment perception for mine robots and is well suited to image edge detection in low-illumination mines. (C) 2016 Optical Society of America
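The Retinex step at the heart of this pipeline can be illustrated in its simplest single-scale form: subtract the log of an estimated illumination from the log of the image. The paper uses a modified multiscale Retinex applied to wavelet low-frequency coefficients, so the sketch below only shows the core idea, and the Gaussian sigma is an arbitrary choice.

```python
# Basic single-scale Retinex (log image minus log of its illumination estimate).
# The paper's method is a modified multiscale Retinex on wavelet low-frequency
# coefficients; this is only the core idea, and sigma below is assumed.
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=80.0):
    """Enhance a low-light grayscale image by removing estimated illumination."""
    img = gray.astype(np.float64) + 1.0               # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    retinex = np.log(img) - np.log(illumination)
    # Stretch the result back to the 0-255 range for display.
    retinex = (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-12)
    return (retinex * 255).astype(np.uint8)
```

Removing the slowly varying illumination component in this way compresses the dynamic range of dark underground scenes, so that subsequent edge detection responds to reflectance structure rather than to the uneven lighting.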
ISBN (print): 9781467399616
Pedestrian segmentation in infrared images is a difficult problem owing to low SNR and inhomogeneous luminance distribution. In this paper, we propose a method that aims to obtain accurate pedestrian segmentation through a background prior and a boundary-weighted saliency. The background likelihood is first calculated as a background prior to obtain an abstract representation of the infrared pedestrian. Then, by considering the object-center prior, an object-biased Gaussian model is applied to derive the probability density estimate for pedestrians. Finally, the above two results are integrated with the boundary weight to obtain the final saliency map of the infrared image, from which pedestrians can be easily segmented. Experimental results on real infrared images captured by intelligent transportation systems demonstrate the effectiveness of the proposed approach against state-of-the-art algorithms.
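One ingredient named above, the object-biased Gaussian expressing the object-center prior, amounts to a weight map over pixel coordinates that peaks near an assumed object centre. The sketch below builds such a map; the centre location and bandwidths are illustrative assumptions, not the parameters estimated in the paper.

```python
# Centre-biased Gaussian prior map: pixels near an assumed object centre get
# higher weight. Centre and bandwidths are illustrative, not the paper's values.
import numpy as np

def gaussian_center_prior(height, width, center=None, sigma_frac=0.25):
    """Return an (height, width) map in (0, 1] peaking at the given centre."""
    cy, cx = center if center is not None else (height / 2.0, width / 2.0)
    ys, xs = np.mgrid[0:height, 0:width]
    sigma_y, sigma_x = sigma_frac * height, sigma_frac * width
    prior = np.exp(-(((ys - cy) ** 2) / (2 * sigma_y ** 2)
                     + ((xs - cx) ** 2) / (2 * sigma_x ** 2)))
    return prior
```

Multiplying a raw saliency map by such a prior suppresses responses near the image borders, which is the usual effect of combining a centre prior with a background (boundary) prior.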
ISBN (print): 9781538604915
Surveillance is essential for the safety of power substations. Detecting whether perambulatory workers are wearing safety helmets is a key component of the overall intelligent surveillance system in a power substation. In this paper, a novel and practical safety helmet detection framework based on computer vision, machine learning, and image processing is proposed. To identify moving objects in the power substation, the ViBe background modelling algorithm is employed. Then, based on the result of moving-object segmentation, the real-time human classification framework C4 is applied to locate pedestrians in the power substation accurately and quickly. Finally, according to the result of pedestrian detection, safety helmet wearing is detected using head location, color space transformation, and color feature discrimination. Extensive experimental results in power substations illustrate the efficiency and effectiveness of the proposed framework.
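The final stage of the pipeline, color space transformation plus color feature discrimination on the detected head region, could look roughly like the OpenCV sketch below. The HSV bounds and the 30% coverage threshold are made-up values for illustration; they are not the thresholds used in the paper, and real helmets come in several colors that would each need their own range.

```python
# Illustrative colour-feature check on a detected head region: convert to HSV
# and test whether enough pixels fall in an assumed helmet-colour range.
# The HSV bounds and the 30% threshold are assumptions, not the paper's values.
import cv2
import numpy as np

def looks_like_helmet(head_bgr, lower=(20, 100, 100), upper=(35, 255, 255),
                      min_ratio=0.3):
    """Return True if a large share of the head region matches the colour range."""
    hsv = cv2.cvtColor(head_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    ratio = cv2.countNonZero(mask) / float(mask.size)
    return ratio >= min_ratio
```

Working in HSV rather than BGR separates hue from brightness, which makes a simple range test of this kind far less sensitive to the lighting variations typical of outdoor substation footage.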