ISBN (print): 9781509044573
This paper presents a new image enhancement method using histogram equalization called Bi-Histogram Equalization with Non-parametric Modified Technology (BHENMT). Our proposed method consists of three steps: (i) the input original histogram is divided into two parts using the Otsu method; (ii) a histogram modification technique is then used to control over-enhancement and maximize entropy; (iii) the two sub-images are enhanced by traditional histogram equalization using their respective modified histograms and are finally merged into one enhanced output image. The experimental results show that BHENMT outperforms other contrast enhancement methods according to both subjective evaluation and several objective image quality measures, i.e., entropy, AMBE, and PSNR.
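To make the two-part split concrete, here is a minimal sketch of steps (i) and (iii), assuming a grayscale 8-bit input; the Otsu call, the function name, and the omission of the entropy-maximizing histogram modification of step (ii) are our own simplifications, not the authors' implementation.

```python
import numpy as np
import cv2  # used here only for the Otsu threshold

def bi_histogram_equalize(img):
    """Sketch of bi-histogram equalization: split at the Otsu threshold and
    equalize each sub-image within its own intensity range (steps (i), (iii));
    the paper's histogram modification (step (ii)) is omitted."""
    # (i) Otsu threshold separates the image into a dark and a bright sub-image.
    t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    t = int(t)
    out = np.zeros_like(img)
    for lo, hi, mask in [(0, t, img <= t), (t + 1, 255, img > t)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        # (iii) Equalize this sub-image within [lo, hi] only.
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / vals.size
        mapping = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = mapping[vals.astype(np.int64) - lo]
    return out
```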
ISBN (print): 9783319302980; 9783319302966
In this paper, we propose a new distributed environment for High Performance Computing (HPC) based on mobile agents. It allows parallel programs to be executed in a distributed fashion over a flexible grid formed by a cooperative team of mobile agents. The distributed program to be executed is encapsulated in a team-leader agent, which deploys its team workers as Agent Virtual Processing Units (AVPUs). Each AVPU performs its assigned tasks and returns its computational results, which makes the management of data and team tasks difficult for the team-leader agent and degrades computing performance. In this work we focus on the implementation of a Mobile Provider Agent (MPA) to manage the distribution of data and instructions and to provide a load-balancing model. It also offers mechanisms for handling other computing challenges, thanks to the varied capabilities of mobile agents.
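The mobile-agent machinery itself is not described in the abstract; as a rough stand-in, the sketch below uses a Python process pool to illustrate the leader/worker split, in which a "team leader" hands data chunks to "AVPU" workers and merges their partial results. All names and the squared-sum task are illustrative analogies, not the paper's API.

```python
from multiprocessing import Pool

def avpu_task(chunk):
    """Stand-in for the work a single AVPU performs on its assigned data chunk."""
    return sum(x * x for x in chunk)

def team_leader(data, num_workers=4):
    """The 'team leader' splits the data, dispatches the chunks to the workers,
    and merges their partial results (the data-distribution role of the MPA)."""
    chunks = [data[i::num_workers] for i in range(num_workers)]
    with Pool(num_workers) as pool:
        partials = pool.map(avpu_task, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(team_leader(list(range(1_000_000))))
```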
ISBN (print): 9781467399616
A cloud-based encoding pipeline which generates streams for video-on-demand distribution typically processes a wide diversity of content with varying signal characteristics. To produce the best quality video streams, the system needs to adapt the encoding to each piece of content in an automated and scalable way. In this paper, we describe two algorithmic optimizations for a distributed cloud-based encoding pipeline: (i) per-title complexity analysis for bitrate-resolution selection; and (ii) per-chunk bitrate control for consistent-quality encoding. These improvements result in a number of advantages over a simple "one-size-fits-all" encoding system, including more efficient bandwidth usage and more consistent video quality.
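As an illustration of per-chunk bitrate control, the sketch below nudges each chunk's bitrate toward a common quality target; the toy quality model, the proportional step rule, and the bounds are assumptions made for the example and not the paper's algorithm.

```python
import math

TARGET_QUALITY = 42.0            # e.g. a PSNR target in dB (assumed)
MIN_KBPS, MAX_KBPS = 500, 8000   # assumed rate bounds

def encode_and_measure(chunk_complexity, kbps):
    """Toy stand-in for a trial encode: quality grows with log(bitrate) and
    drops with content complexity (purely an illustrative model)."""
    return 38.0 + 6.0 * math.log2(kbps / 1000.0) - chunk_complexity

def pick_chunk_bitrate(chunk_complexity, start_kbps=2500, iters=5):
    """Iteratively adjust a chunk's bitrate so every chunk lands near the same
    quality, instead of giving all chunks the same 'one-size-fits-all' rate."""
    kbps = start_kbps
    for _ in range(iters):
        q = encode_and_measure(chunk_complexity, kbps)
        # Spend more bits on complex chunks that fall short of the target,
        # fewer on easy chunks that overshoot it.
        kbps = int(kbps * (1.0 + 0.1 * (TARGET_QUALITY - q)))
        kbps = max(MIN_KBPS, min(MAX_KBPS, kbps))
    return kbps

print(pick_chunk_bitrate(1.0), pick_chunk_bitrate(8.0))  # easy vs. complex chunk
```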
ISBN (print): 9781509022397
Various methods have been proposed for enhancing images. Some of these perform well in specific application areas, but most of the techniques suffer from artifacts due to over-enhancement. To overcome this problem, we introduce a new image enhancement technique, Bilateral Histogram Equalization with Pre-processing (BHEP), which uses the harmonic mean to divide the histogram of the image. We have performed both qualitative and quantitative evaluations, and the results show that BHEP produces fewer artifacts on several standard images than existing state-of-the-art image enhancement techniques.
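A small sketch of the split-point step only, assuming zero-valued pixels are excluded so the harmonic mean stays defined; the paper's pre-processing step and its per-side equalization are not reproduced here.

```python
import numpy as np

def harmonic_mean_split(img):
    """Harmonic mean of the (non-zero) pixel intensities, used as the point
    at which the histogram is divided into two sub-histograms."""
    vals = img[img > 0].astype(np.float64)
    return int(round(vals.size / np.sum(1.0 / vals)))
```

The two sub-histograms on either side of this value can then be equalized independently, in the same spirit as the bi-histogram sketch given earlier, with this threshold in place of the Otsu value.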
This study presents a novel robust method for Bayesian denoising of parallel magnetic resonance imaging (pMRI) images. For the first time, the authors' proposal applies fields of experts (FoE), a filter-based higher-order Markov random field (MRF), to model the prior of the pMRI image statistics. The noise in pMRI data follows a non-central chi (nc-χ) distribution. In practice, correlation between coils exists, so the nc-χ assumption no longer holds and the noise becomes spatially varying. Preserving fine textures therefore requires the estimation to be adapted locally. More precisely, the noise is reduced using a sliding-window scheme. In each window, the likelihood function is accurately modelled from the corrupted data using a Gaussian mixture model (GMM). The parameters of the GMM are calculated with an iterative expectation-maximisation approach. With the prior given by the learned FoE model and the likelihood given by the GMM, a maximum a posteriori (MAP) estimator is formulated. The noise in each window is then filtered by applying an efficient nonlinear quasi-Newton method to find an optimal solution of the MAP estimator. Finally, experiments have been conducted on both simulated and real data to compare the proposed model with several state-of-the-art denoising methods. The experimental results demonstrate the robustness and effectiveness of the proposed denoising model.
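For the per-window likelihood, the abstract describes fitting a GMM by expectation maximisation; a generic 1-D EM fit over the intensities of one window might look like the sketch below. The component count, initialization, and regularization constants are our own choices, not the paper's parameterization.

```python
import numpy as np

def fit_gmm_em(x, k=2, iters=50, seed=0):
    """Plain EM for a 1-D Gaussian mixture on the intensities of one sliding
    window; returns mixture weights, means, and variances."""
    x = np.asarray(x, dtype=np.float64).ravel()
    rng = np.random.default_rng(seed)
    n = x.size
    mu = rng.choice(x, size=k, replace=False)          # initial means
    var = np.full(k, x.var() + 1e-6)                   # initial variances
    w = np.full(k, 1.0 / k)                            # initial weights
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel.
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
        log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from the responsibilities.
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
    return w, mu, var

# Demo on synthetic two-mode "window" intensities.
rng = np.random.default_rng(1)
demo = np.concatenate([rng.normal(100, 8, 400), rng.normal(160, 12, 400)])
print(fit_gmm_em(demo))
```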
Image diffusion plays a fundamental role in the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. Howeve...
ISBN (print): 9781510845541
The indoor-outdoor scene classification problem has been studied for almost 20 years and is widely applied to general scene classification, image retrieval, image processing, and robotics. However, there is no consensus on one particular technique that solves the indoor-outdoor scene classification problem perfectly. As larger image datasets have been developed and machine learning, especially deep learning based methods, has achieved remarkable performance in computer vision, we aim to provide guidance and direction for researchers tackling the indoor-outdoor scene classification problem with more powerful and robust solutions by summarizing the approaches proposed over the last 20 years. In this paper, we review indoor-outdoor scene classification, including feature extraction, classifiers, and related datasets, and discuss their advantages and disadvantages. Finally, we identify challenging problems that remain unsolved and propose some potential solutions.
We introduce a novel approach to Maximum A Posteriori inference based on discrete graphical models. By utilizing local Wasserstein distances for coupling assignment measures across edges of the underlying graph, a giv...
This paper proposes a parallelization method for the deblocking filter in a high-efficiency video coding (HEVC) decoder based on complexity estimation. The deblocking filter of HEVC is generally considered well suited to data-level parallelism (DLP) because there are no data dependencies among adjacent blocks in the horizontal and vertical filtering processes. However, an imbalanced workload can increase the idle time on some threads, and thus the maximum parallel performance cannot be achieved with DLP alone. To alleviate this problem, the proposed method estimates the computational complexity in advance by utilizing the coding unit (CU) segment information and the on/off flag of the deblocking filter, and then distributes the workload equally across all threads. The experimental results indicate that the proposed method can accelerate decoding by a factor of 3.97 with six threads relative to sequential deblocking filtering. In addition, the ratio of the maximum elapsed time to the ideal elapsed time is reduced to approximately 21%, compared to conventional DLP-based parallel deblocking filtering methods that do not use complexity estimation.
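A hedged sketch of the balancing idea: estimate a per-region cost from CU counts and deblocking on/off flags, then assign regions to threads largest-first so the estimated loads stay even. The cost weights and the longest-processing-time heuristic are assumptions for illustration, not the paper's exact scheme.

```python
import heapq

def estimate_region_cost(num_cus, filter_on_flags):
    """Hypothetical per-region cost model: count CU edges whose deblocking
    flag is on, plus a small per-CU overhead term."""
    return sum(1 for f in filter_on_flags if f) + 0.1 * num_cus

def balance_regions(costs, num_threads):
    """Longest-processing-time-first assignment of filtering regions to
    threads, so each thread receives roughly equal estimated work."""
    heap = [(0.0, t) for t in range(num_threads)]   # (current load, thread id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_threads)]
    for idx in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, t = heapq.heappop(heap)               # least-loaded thread
        assignment[t].append(idx)
        heapq.heappush(heap, (load + costs[idx], t))
    return assignment

# Example: three heavy regions and five light ones split over four threads.
costs = [9.0, 8.5, 8.0, 2.0, 1.5, 1.5, 1.0, 1.0]
print(balance_regions(costs, num_threads=4))
```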
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT-based methods can reduce the amount of computation, but this generally comes at the cost of increased memory requirements. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required, and thus speed up the computation, without increasing the required memory. This strategy has been shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks, apply it to a popular 3D convolutional neural network used to classify videos, and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
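To illustrate the WMFA building block, the sketch below computes the 1-D case F(2,3) — two outputs of a 3-tap correlation with four multiplies instead of six — and checks it against direct correlation; the 3-D case F(2x2x2, 3x3x3) used for 3-D convolution layers nests these same transforms along depth, height, and width. This is a textbook Winograd sketch, not the paper's GPU implementation.

```python
import numpy as np

# Winograd F(2,3) transform matrices: input transform B^T, filter transform G,
# and inverse (output) transform A^T.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float64)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs of valid correlation."""
    U = G @ g            # transform the filter (can be precomputed per layer)
    V = BT @ d           # transform the input tile
    return AT @ (U * V)  # 4 elementwise multiplies, then inverse transform

# Quick check against a direct 3-tap correlation on one tile.
d = np.random.rand(4)
g = np.random.rand(3)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```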