ISBN (Print): 9781509032051
In this paper, an Adaptive Polar based Filtering Method is proposed for image copy-move forgery detection. To improve the performance of the detection method, the post-processing of the matching results deserves particular attention. To filter out redundant pixels from the initially matched pixels, two pixel sets, a Symmetrical Matched Pixels set and an Unsymmetrical Matched Pixels set, are extracted from the matched pixel pairs; the polar distributions of the two sets are then calculated respectively. The filtering thresholds can then be adaptively calculated according to the polar distributions, so that redundant pixels can be filtered out accordingly. Finally, morphological operations are applied to the remaining pixels to generate the detected forged regions. Experimental results show that the proposed scheme achieves much better detection results than existing state-of-the-art copy-move forgery detection methods.
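The core of this post-processing step is easy to picture in code. The Python sketch below filters matched pixel pairs by the distribution of their displacements in polar coordinates and then applies a morphological closing; the binning scheme, the quantile-based threshold and the helper names (polar_filter_matches, matches_to_mask) are illustrative assumptions, not the paper's exact adaptive rule or its symmetric/unsymmetric split.

```python
import numpy as np
from scipy import ndimage

def polar_filter_matches(matches, angle_bins=36, radius_bins=20, keep_quantile=0.5):
    """Filter redundant matched pixel pairs by their polar displacement distribution.

    `matches` is an (N, 4) array of (x1, y1, x2, y2) matched pixel pairs.
    The binning and the quantile-based threshold are illustrative assumptions.
    """
    dx = matches[:, 2] - matches[:, 0]
    dy = matches[:, 3] - matches[:, 1]
    radius = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx)  # in [-pi, pi]

    # Quantize each displacement into a polar (angle, radius) bin.
    a_idx = np.minimum(((angle + np.pi) / (2 * np.pi) * angle_bins).astype(int), angle_bins - 1)
    r_idx = np.minimum((radius / (radius.max() + 1e-9) * radius_bins).astype(int), radius_bins - 1)
    hist = np.zeros((angle_bins, radius_bins), dtype=int)
    np.add.at(hist, (a_idx, r_idx), 1)

    # Adaptive threshold: keep pairs that fall into well-populated polar bins.
    threshold = np.quantile(hist[hist > 0], keep_quantile)
    keep = hist[a_idx, r_idx] >= threshold
    return matches[keep]

def matches_to_mask(matches, shape, closing_size=5):
    """Turn the surviving pixel pairs into a binary forgery mask via morphology."""
    mask = np.zeros(shape, dtype=bool)
    for x1, y1, x2, y2 in matches.astype(int):
        mask[y1, x1] = True
        mask[y2, x2] = True
    return ndimage.binary_closing(mask, structure=np.ones((closing_size, closing_size)))
```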
Various methods have been proposed for enhancing images. Some of them perform well in specific application areas, but most techniques suffer from artifacts due to over-enhancement. To overcome this problem, we introduce a new image enhancement technique, Bilateral Histogram Equalization with Pre-processing (BHEP), which uses the harmonic mean to divide the histogram of the image. We have performed both qualitative and quantitative measurements in our experiments, and the results show that BHEP creates fewer artifacts on several standard images than existing state-of-the-art image enhancement techniques.
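As a rough illustration of splitting a histogram at the harmonic mean and equalizing the two halves separately, here is a minimal Python sketch; the paper's pre-processing stage is omitted and the function names are hypothetical.

```python
import numpy as np

def bhep_sketch(gray):
    """Bi-histogram equalization split at the harmonic mean of the intensities.

    A minimal sketch of the idea only; the paper's pre-processing step is omitted.
    """
    gray = gray.astype(np.float64)
    nonzero = gray[gray > 0]                      # harmonic mean is undefined for zeros
    hmean = nonzero.size / np.sum(1.0 / nonzero)
    split = int(round(hmean))

    out = np.empty_like(gray)
    lower = gray <= split
    # Equalize each side of the split into its own output range.
    out[lower] = _equalize(gray[lower], 0, split)
    out[~lower] = _equalize(gray[~lower], split + 1, 255)
    return out.astype(np.uint8)

def _equalize(values, lo, hi):
    """Map `values` onto [lo, hi] via their empirical CDF."""
    if values.size == 0:
        return values
    hist, _ = np.histogram(values, bins=256, range=(0, 255))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return lo + cdf[np.clip(values.astype(int), 0, 255)] * (hi - lo)
```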
ISBN (Print): 9781509053827
This paper presents a new image enhancement method using histogram equalization, called Bi-Histogram Equalization with Non-parametric Modified Technology (BHENMT). Our proposed method consists of three steps: (i) the original input histogram is divided into two parts using the Otsu method; (ii) a histogram modification technique is then used to control over-enhancement and maximize entropy; (iii) the two sub-images are enhanced by traditional histogram equalization using their corresponding modified histograms and are finally merged into one enhanced output image. The experimental results show that BHENMT outperforms other contrast enhancement methods according to subjective evaluation and various objective image quality measures, i.e. Entropy, AMBE and PSNR.
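The three steps map naturally onto a short sketch: Otsu split, a modified (here simply clipped) histogram, and per-part equalization. The clip-and-redistribute rule below is an assumed stand-in for the paper's non-parametric modification, and bhenmt_sketch is a hypothetical name.

```python
import numpy as np
from skimage.filters import threshold_otsu

def bhenmt_sketch(gray, clip_factor=2.0):
    """Otsu-split bi-histogram equalization with a clipped (modified) histogram.

    The clipping stands in for the paper's non-parametric histogram
    modification; it is an assumption, not the exact BHENMT rule.
    """
    t = threshold_otsu(gray)
    out = np.empty_like(gray)
    for mask, lo, hi in [(gray <= t, 0, int(t)), (gray > t, int(t) + 1, 255)]:
        vals = gray[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=256, range=(0, 255))
        # Clip tall bins and spread the excess uniformly to tame over-enhancement.
        limit = clip_factor * hist[hist > 0].mean()
        excess = np.sum(np.maximum(hist - limit, 0))
        hist = np.minimum(hist, limit) + excess / 256.0
        cdf = np.cumsum(hist) / np.sum(hist)
        out[mask] = (lo + cdf[np.clip(vals.astype(int), 0, 255)] * (hi - lo)).astype(gray.dtype)
    return out
```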
ISBN (Print): 9781467399623
A cloud-based encoding pipeline which generates streams for video-on-demand distribution typically processes a wide diversity of content that exhibits varying signal characteristics. To produce the best-quality video streams, the system needs to adapt the encoding to each piece of content in an automated and scalable way. In this paper, we describe two algorithm optimizations for a distributed cloud-based encoding pipeline: (i) per-title complexity analysis for bitrate-resolution selection; and (ii) per-chunk bitrate control for consistent-quality encoding. These improvements result in a number of advantages over a simple "one-size-fits-all" encoding system, including more efficient bandwidth usage and more consistent video quality.
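To make the two optimizations concrete, the Python sketch below picks one bitrate per resolution from trial-encode rate/quality points and then scales a chunk's bitrate by its relative complexity. The data structures, the quality target and the function names are assumptions for illustration, not the pipeline's actual interface.

```python
def pick_bitrate_ladder(rd_points, quality_target=80.0):
    """Choose one bitrate per resolution from trial-encode rate/quality points.

    `rd_points` maps resolution -> list of (bitrate_kbps, quality_score) pairs;
    both the structure and the quality target are illustrative assumptions.
    """
    ladder = {}
    for resolution, points in rd_points.items():
        # Lowest bitrate that reaches the quality target, else the best available.
        good = [b for b, q in sorted(points) if q >= quality_target]
        ladder[resolution] = good[0] if good else max(points)[0]
    return ladder

def per_chunk_bitrates(chunk_complexities, ladder_bitrate):
    """Scale the ladder bitrate per chunk so easy chunks spend fewer bits.

    `chunk_complexities` are relative scores (e.g. from a fast first pass),
    normalised so that a chunk of average complexity gets the ladder bitrate.
    """
    mean_c = sum(chunk_complexities) / len(chunk_complexities)
    return [ladder_bitrate * c / mean_c for c in chunk_complexities]
```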
Segmentation of anatomical structures, from modalities like computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound, is a key enabling technology for medical applications such as diagnostics, planning and guidance. More efficient implementations are necessary, as most segmentation methods are computationally expensive and the amount of medical imaging data is growing. The increased programmability of graphics processing units (GPUs) in recent years has enabled their use in several areas. GPUs can solve large data-parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Furthermore, using a GPU enables concurrent visualization and interactive segmentation, where the user can help the algorithm achieve a satisfactory result. This review investigates the use of GPUs to accelerate medical image segmentation methods. A set of criteria for efficient use of GPUs is defined and each segmentation method is rated accordingly. In addition, references to relevant GPU implementations and insight into GPU optimization are provided and discussed. The review concludes that most segmentation methods may benefit from GPU processing due to the methods' data-parallel structure and high thread count. However, factors such as synchronization, branch divergence and memory usage can limit the speedup. (C) 2014 The Authors. Published by Elsevier B.V.
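The data-parallel structure the review refers to can be seen in even the simplest segmentation step, where every voxel is tested independently. The NumPy sketch below vectorizes such a step on the CPU, whereas a GPU implementation would assign one thread per voxel; it is illustrative only and not drawn from the review.

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Elementwise segmentation step: one independent test per voxel.

    Per-voxel independence is what makes such steps map well to GPUs;
    here NumPy simply vectorises the same operation on the CPU.
    """
    return (volume >= lower) & (volume <= upper)
```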
ISBN (Print): 9781509021413
Graph algorithms are challenging to parallelize on manycore architectures due to complex data dependencies and irregular memory access. We consider the well-studied problem of coloring the vertices of a graph. In many applications it is important to compute a coloring with few colors in near-linear time. In parallel, the optimistic (speculative) coloring method by Gebremedhin and Manne is the preferred approach, but it needs to be modified for manycore architectures. We discuss a range of implementation issues for this vertex-based optimistic approach. We also propose a novel edge-based optimistic approach that has more parallelism and is better suited to GPUs. We study the performance empirically on two architectures (Xeon Phi and GPU) and across many data sets (from finite element problems to social networks). Our implementation uses the Kokkos library, so it is portable across platforms. We show that on GPUs, we significantly reduce the number of colors (geometric mean 4X, but up to 48X) compared with the widely used cuSPARSE library. In addition, our edge-based algorithm is 1.5 times faster on average than cuSPARSE, with speedups up to 139X on a circuit problem. We also show the effect of the coloring on a conjugate gradient solver that uses multi-colored symmetric Gauss-Seidel as a preconditioner: the higher coloring quality found by the proposed methods reduces the overall solve time by up to 33% compared with cuSPARSE.
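For readers unfamiliar with speculative coloring, the Python sketch below simulates the Gebremedhin-Manne scheme sequentially: each round colors a worklist against a snapshot of the current colors, detects conflicting neighbours, and re-queues one endpoint. It sketches the vertex-based idea only, not the paper's edge-based algorithm or its Kokkos implementation.

```python
def speculative_coloring(adj):
    """Optimistic (speculative) greedy coloring in the Gebremedhin-Manne style.

    `adj` is a list of neighbour lists. Each round colors the whole worklist
    against a snapshot of the colors (mimicking parallel execution), then
    detects conflicting edges and re-queues one endpoint of each. A sequential
    Python sketch of the parallel scheme.
    """
    n = len(adj)
    color = [-1] * n
    worklist = list(range(n))
    while worklist:
        snapshot = color[:]  # colors as seen at the start of the "parallel" round
        for v in worklist:
            # Smallest color not used by any neighbour in the snapshot.
            forbidden = {snapshot[u] for u in adj[v] if snapshot[u] >= 0}
            c = 0
            while c in forbidden:
                c += 1
            color[v] = c
        # Conflict detection: neighbours that ended up with the same color.
        conflicts = set()
        for v in worklist:
            for u in adj[v]:
                if u != v and color[u] == color[v]:
                    conflicts.add(max(u, v))
        worklist = sorted(conflicts)
    return color
```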
Functional neuroimaging and parallel distributed processing (PDP) theory, both introduced to cognitive science in the 1980s, led to influential research programmes that have proceeded in parallel with little mutual influence. The PDP approach advanced specific claims about the nature of neural representations that, perhaps surprisingly, have gone largely untested in functional brain imaging. One reason may be the widespread use of univariate statistical methods for analysing brain imaging data, which typically rely on assumptions that render them unable to detect distributed representations of the kind that PDP predicts. More recent multivariate methods for image analysis may be better suited to detecting such representations. In the current article, we consider why univariate methods have been insufficient to test PDP's representational claims, articulate some of the properties that neural representations ought to have if the PDP view is valid and then survey the recent neuroimaging literature for evidence that neural representations do or do not have these properties. The survey establishes that the PDP view of distributed representations has considerable evidential support. This analysis underscores the importance of understanding how the assumptions underlying methods for analysing functional imaging data constrain the kinds of questions that can be addressed. We then consider the implications for our developing understanding of the neural bases of cognition and for the design of future brain imaging studies.
Integral image computing is an important part of many vision applications and is characterized by intensive computation and frequent memory accesses. This brief proposes an approach for fast integral image computing with high area and power efficiency. For the data flow of the integral image computation, a dual-direction data-oriented integral image computing mechanism is proposed to improve processing efficiency, and a pipelined parallel architecture is designed to support this mechanism. The parallelism and time complexity of the approach are analyzed, and the hardware implementation cost of the proposed architecture is also presented. Compared with state-of-the-art methods, this architecture achieves the highest processing speed with comparatively low logic resources and power consumption.
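For reference, the integral image itself follows the standard summed-area-table recurrence, shown below in NumPy together with an O(1) box-sum query. This is the software baseline that such hardware accelerates; it does not model the paper's dual-direction pipelined design.

```python
import numpy as np

def integral_image(img):
    """Reference integral image (summed-area table).

    Implements ii(x, y) = img(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)
    via two cumulative sums; int64 avoids overflow for large images.
    """
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from the integral image in O(1)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```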
ISBN (Print): 9781509036707
Our country has vast potential to emerge as a leading exporter of agricultural produce, but the lack of quick quality evaluation techniques, huge losses in processing and handling after harvesting, diseased crops, and similar factors result in a lower contribution to the global market. The tomato crop is often infected by a disease in which the plant's leaves become covered with dark brown spots with purple borders and light grey centers, termed Septoria Leaf Spot. It causes the leaves to turn yellow, but most damage occurs due to the loss of leaves through infection. In this paper, tomato maturity based on color and fungal infection in tomato leaves is determined. Initially, a thresholding algorithm was used to determine the maturity of the tomato. To make the system more generalized and self-adapting, a shift to the k-means clustering algorithm was made. Finally, a comparative analysis of both methods was carried out to determine which is more suitable under different conditions. An unconventional machine vision system is also suggested that scrutinizes the leaves emerging from the soil and, depending on the leaf spots, analyzes the nature of the fungus and its depth into the stem of the tomato. The k-means algorithm, along with thresholding, is used for image segmentation and, eventually, for identifying the fungus; the segmented fungus region is then studied to derive the percentage of infection.
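A minimal sketch of the k-means-based part of such a pipeline is given below in Python: cluster the leaf pixels by color and report the area share of the cluster taken to be spots. Picking the darkest cluster as the spot cluster, and the function name infection_percentage, are simplifying assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def infection_percentage(rgb, k=3, spot_cluster_picker=None):
    """Segment a leaf image with k-means on color and estimate the infected area.

    Which cluster corresponds to Septoria spots is decided here by the darkest
    centroid, a simplifying assumption in place of the paper's analysis.
    """
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    if spot_cluster_picker is None:
        spot = int(np.argmin(km.cluster_centers_.sum(axis=1)))  # darkest centroid
    else:
        spot = spot_cluster_picker(km.cluster_centers_)
    # Fraction of pixels assigned to the spot cluster, as a percentage.
    return 100.0 * np.mean(km.labels_ == spot)
```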