ISBN (digital): 9781728144115
ISBN (print): 9781728144122
Optical image processing methods are important for the creation and development of an automated measuring information optical system. Statistical methods provide additional information and a better understanding of the object under study. They allow the main features of the image to be described by several parameters, such as the mean value, median and standard deviation. To correlate computer simulation with experimental data, a signal quantization method is described. The parameters of the normal distribution fitted to the optical image variation curve are determined, and the agreement between the empirical and theoretical data is assessed by the Romanovsky criterion. The obtained results can be used as a preliminary analysis for digital processing of optical images, with applications in compression and encryption, holography, identification and tracking.
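A minimal sketch of the kind of preliminary statistical analysis this abstract describes: descriptive statistics, a fitted normal distribution, and a Romanovsky-style goodness-of-fit check built on the chi-square statistic. The bin count and the usual acceptance threshold (R < 3) are assumptions, not values taken from the paper.

# Assumed helper, not the paper's implementation.
import numpy as np
from scipy import stats

def analyze_intensity(samples, bins=16):
    samples = np.asarray(samples, dtype=float)
    summary = {
        "mean": samples.mean(),
        "median": np.median(samples),
        "std": samples.std(ddof=1),
    }
    mu, sigma = stats.norm.fit(samples)              # fitted normal distribution
    observed, edges = np.histogram(samples, bins=bins)
    cdf = stats.norm.cdf(edges, mu, sigma)
    expected = len(samples) * np.diff(cdf)           # expected counts per bin
    chi2 = np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))
    dof = bins - 1 - 2                               # bins minus 1, minus 2 fitted parameters
    romanovsky = abs(chi2 - dof) / np.sqrt(2 * dof)  # R < 3 is commonly taken as acceptable agreement
    return summary, (mu, sigma), romanovsky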
ISBN (print): 9789881476852
This paper proposes a novel image segmentation method based on luminance distribution and its application to image enhancement. Many existing image segmentation methods focus on semantic segmentation, which separates an image into meaningful areas. However, those segmentation methods are not effective for image enhancement. The proposed segmentation method separates an image into areas according to the luminance values of its pixels. To obtain those areas, the proposed method utilizes a clustering algorithm based on a Gaussian mixture model fitted with a variational Bayesian approach. Building on the proposed segmentation method, an automatic exposure compensation method is also proposed. The exposure compensation method automatically produces pseudo multi-exposure images from a single image and improves image quality by fusing them. Experimental results show that the proposed segmentation method is effective for image enhancement. In addition, the image enhancement method using the proposed segmentation method outperforms state-of-the-art contrast enhancement methods in terms of entropy and statistical naturalness.
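A sketch of luminance-based clustering with a variational-Bayes Gaussian mixture, in the spirit of the abstract. Using scikit-learn's BayesianGaussianMixture, BT.601 luma weights and an upper bound of five components are assumptions, not details from the paper.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def segment_by_luminance(rgb_image, max_components=5):
    # Assumed luminance definition (ITU-R BT.601 weights); the paper's may differ.
    luminance = rgb_image[..., :3] @ np.array([0.299, 0.587, 0.114])
    gmm = BayesianGaussianMixture(
        n_components=max_components,                      # upper bound; VB can prune unused components
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    )
    labels = gmm.fit_predict(luminance.reshape(-1, 1))
    return labels.reshape(luminance.shape)                # per-pixel cluster index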
ISBN (print): 9781479970612
The blind deblurring algorithm aims to restore the blur kernel and the sharp image from a degraded image with blur and noise artifacts. In this paper, we propose a novel adaptive patch prior model based on local statistics as a constraint term for blur kernel recovery. With this prior, our approach can rebuild the step edge of a patch and enhance low-level features (edges, corners and junctions) by strengthening the guidance that helps sharpen edges and texture structures for latent image restoration. Note that our prior is a nonparametric model that does not rely on external statistical image knowledge and depends only on internal patch information for adaptive computation. Moreover, the proposed prior is able to alleviate noise and over-sharpening artifacts caused by heuristic methods. Experiments on two benchmark datasets and a natural image show that our approach compares favorably with other state-of-the-art methods for kernel estimation.
ISBN (digital): 9781728132488
ISBN (print): 9781728132495
Spatial images and optical flow provide complementary information for video representation and classification. Traditional methods encode the two stream signals separately and then fuse them at the end of the streams. This paper presents a new multi-stream recurrent neural network in which the streams are tightly coupled at each time step. Importantly, we propose a stochastic fusion mechanism for multiple streams of video data based on Gumbel samples to increase the prediction power. A stochastic backpropagation algorithm is implemented to train the multi-stream neural network with stochastic fusion through joint optimization of the convolutional encoder and the recurrent decoder. Experiments on the UCF101 dataset illustrate the merits of the proposed stochastic fusion in a recurrent neural network in terms of interpretability and classification performance.
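A hedged sketch of a Gumbel-based stochastic fusion step for two feature streams (spatial and optical-flow), roughly in the spirit of the abstract. The gating network, the temperature and the PyTorch implementation are assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticFusion(nn.Module):
    def __init__(self, feature_dim, temperature=1.0):
        super().__init__()
        self.gate = nn.Linear(2 * feature_dim, 2)   # one logit per stream (assumed gating form)
        self.temperature = temperature

    def forward(self, spatial_feat, flow_feat):
        logits = self.gate(torch.cat([spatial_feat, flow_feat], dim=-1))
        # Gumbel-softmax gives differentiable (near-)one-hot stream weights,
        # so gradients flow through the sampling step during training.
        weights = F.gumbel_softmax(logits, tau=self.temperature, hard=False)
        fused = weights[..., :1] * spatial_feat + weights[..., 1:] * flow_feat
        return fused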
ISBN (print): 9783319996080
Due to the broad use of deep learning methods in bioimaging, it seems convenient to create a framework that facilitates the task of analysing different models and selecting the best one to solve each particular problem. In this work-in-progress, we are developing a Python framework for this task in the case of bioimage classification. Namely, the purpose of the framework is to automate and facilitate the process of choosing the best combination of feature extractors (obtained from transfer learning and other techniques) and classification models. The features and models to test are specified in a simple configuration file to make the framework easy to use for non-expert users. The best model is automatically selected through a statistical study and can then be employed to predict the category of new images.
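An illustrative sketch of the kind of configuration-driven model selection the abstract describes: feature sets and classifiers are named in a config file, every combination is scored by cross-validation, and the best one is kept. The config keys, the feature registry and the classifier list are hypothetical; the framework's actual statistical study is more involved than a mean cross-validation score.

import json
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

CLASSIFIERS = {"svm": SVC, "random_forest": RandomForestClassifier}   # assumed registry

def select_best(config_path, features_by_name, labels):
    with open(config_path) as fh:
        config = json.load(fh)                     # e.g. {"features": [...], "models": [...]} (assumed keys)
    best = (None, None, -np.inf)
    for feat_name in config["features"]:
        X = features_by_name[feat_name]            # precomputed transfer-learning features
        for model_name in config["models"]:
            score = cross_val_score(CLASSIFIERS[model_name](), X, labels, cv=5).mean()
            if score > best[2]:
                best = (feat_name, model_name, score)
    return best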
ISBN (print): 9781450364027
Most histogram equalization (HE) based image contrast enhancement methods share a common problem: they over-enhance highly frequent grey levels, whereas less frequent grey levels are comparatively under-enhanced. The reason is that in most HE-based contrast enhancement methods the histogram transformation function is directly proportional to the frequency of occurrence of each grey level in the image. This motivates us to design a tool to solve this problem. We propose a histogram modification filter based on a moving average to deal with the above-mentioned problem. The filter works as a pre-processing step for most conventional HE-based methods. Experimental results show that the proposed filter is able to improve the performance of most conventional HE-based methods. We have also designed a tool that implements all these methods; the tool has been tested for real-time enhancement of video frames.
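A rough sketch of the pre-processing idea described above: smooth the grey-level histogram with a moving average before building the equalization mapping, so that very frequent grey levels dominate the transformation function less. The window size is an assumed parameter, not one reported in the paper, and the final mapping here is plain global HE rather than any specific HE variant.

import numpy as np

def moving_average_equalize(gray_image, window=5):
    # Expects an 8-bit grayscale image with values in [0, 255].
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    kernel = np.ones(window) / window
    smoothed = np.convolve(hist, kernel, mode="same")    # moving-average histogram filter
    cdf = np.cumsum(smoothed)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalized transformation function
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[gray_image]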
ISBN (print): 9781728131573
This paper presents a new approach to texture classification that generalizes well-known statistical features, combining fractal analysis by means of the fractal dimension (FD) with selected first- and second-order statistical features in the spatial and wavelet domains. The objective of this paper is to propose feature extraction using statistical parameters in the spatial domain and in the wavelet domain with different wavelets, with and without a preprocessing stage, for texture classification using neural networks for pattern recognition, and to study the effect of the preprocessing and of the wavelets on classification accuracy. The extracted features are used as the input of the ANN classifier. The performance of the proposed methods is evaluated using two classes of textures from the Brodatz database. Finally, classification assessment measures such as the confusion matrix, ROC curves and accuracy are applied to the proposed methods.
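A hedged sketch of a feature-extraction stage of the kind outlined above: first-order statistics computed in the spatial domain and on wavelet subbands (PyWavelets), plus second-order GLCM features (scikit-image). The fractal-dimension term is omitted, and the exact feature set, wavelet and decomposition level used in the paper are assumptions.

import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def first_order(values):
    v = np.ravel(values).astype(float)
    return [v.mean(), v.std(), skew(v), kurtosis(v)]

def texture_features(gray_image, wavelet="db2", level=2):
    features = first_order(gray_image)                        # spatial-domain statistics
    for band in pywt.wavedec2(gray_image.astype(float), wavelet, level=level)[1:]:
        for subband in band:                                  # (cH, cV, cD) details per level
            features += first_order(subband)
    glcm = graycomatrix(gray_image.astype(np.uint8), distances=[1],
                        angles=[0], levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        features.append(graycoprops(glcm, prop)[0, 0])        # second-order statistics
    return np.array(features)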
ISBN (digital): 9781728192284
ISBN (print): 9781728185361
We present a novel sketch-based system for generating digital bas-relief sculptures. All existing computational methods for generating digital bas-reliefs first require the input of a three-dimensional (3D) scene, which prevents artists from freely creating or exploring designs when 3D data are not available. Motivated by this limitation, we propose a generative adversarial network (GAN)-based sketch modeling system for generating digital bas-reliefs from freehand user sketches (see Figures 1 and 5). The basic tool underpinning the interface is a conditional GAN (cGAN) that learns a functional map from a contour image to a 3D model for any given viewpoint of the corresponding bas-relief model. When using our system to design bas-reliefs, the user only needs to draw 2D sketch lines, without having to designate any additional hints on the lines. The interface returns bas-relief results in interactive time (500 ms per bas-relief on average). We tested the quality and robustness of our approach with extensive and comprehensive experiments. A careful analysis of the results verified that our system can faithfully reconstruct bas-reliefs from a test dataset and can generate completely new reliefs from raw amateur sketches.
ISBN (print): 9781479970612
No-reference (NR) stereoscopic 3D (S3D) image quality assessment (SIQA) is still challenging due to the poor understanding of how the human visual system (HVS) judges image quality based on binocular vision. In this paper, we propose an efficient opinion-aware NR stereoscopic quality predictor based on local contrast statistics combination (SQSC). Specifically, for the left and right views, we first extract statistical features of the gradient magnitude (GM) and Laplacian of Gaussian (LoG) responses, which describe the local image structures from different perspectives. The HVS is insensitive to low-order statistical redundancies, which can be removed by LoG filtering. Hence, the monocular statistical features are fused to derive binocular features through a linear combination model with LoG response-based weightings. These weightings efficiently simulate the binocular rivalry (BR) phenomenon. Finally, the binocular features and the subjective scores are jointly employed to train a regression model using the support vector regression (SVR) algorithm. Experimental results on three widely used 3D IQA databases demonstrate the high prediction performance of the proposed method compared to recent well-performing SIQA methods.
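A sketch of the monocular feature-extraction step outlined above: gradient magnitude (GM) and Laplacian-of-Gaussian (LoG) response maps for one view, summarized by simple statistics, followed by a LoG-energy-weighted combination of the left and right features. The filter scale, the chosen statistics and the weighting rule are assumptions; the paper's exact feature definitions may differ.

import numpy as np
from scipy import ndimage

def gm_log_features(gray_image, sigma=0.5):
    img = gray_image.astype(float)
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))   # smoothed x-derivative
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))   # smoothed y-derivative
    gm = np.hypot(gx, gy)                                    # gradient magnitude map
    log = ndimage.gaussian_laplace(img, sigma)               # LoG response map
    feats = []
    for resp in (gm, log):
        feats += [resp.mean(), resp.std(), np.abs(resp).mean()]
    return np.array(feats), log

def binocular_features(left, right):
    fl, log_l = gm_log_features(left)
    fr, log_r = gm_log_features(right)
    wl, wr = np.abs(log_l).sum(), np.abs(log_r).sum()
    return (wl * fl + wr * fr) / (wl + wr)   # LoG-energy-weighted linear combination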
ISBN (print): 9781948087841
Character-level string-to-string transduction is an important component of various NLP tasks. The goal is to map an input string to an output string, where the strings may be of different lengths and have characters taken from different alphabets. Recent approaches have used sequence-to-sequence models with an attention mechanism to learn which parts of the input string the model should focus on during the generation of the output string. Both soft attention and hard monotonic attention have been used, but hard non-monotonic attention has only been used in other sequence modeling tasks such as image captioning (Xu et al., 2015) and has required a stochastic approximation to compute the gradient. In this work, we introduce an exact, polynomial-time algorithm for marginalizing over the exponential number of non-monotonic alignments between two strings, showing that hard attention models can be viewed as neural reparameterizations of the classical IBM Model 1. We compare soft and hard non-monotonic attention experimentally and find that the exact algorithm significantly improves performance over the stochastic approximation and outperforms soft attention.
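A toy sketch of the exact marginalization at the heart of this abstract: with hard non-monotonic attention the alignment of each output character is an independent latent choice, so the sum over exponentially many alignments factorizes per output position, exactly as in IBM Model 1. Tensor names and shapes here are illustrative, not the paper's implementation.

import torch

def exact_marginal_log_likelihood(attn_logits, emission_log_probs, targets):
    # attn_logits:        (T_out, T_in)      unnormalized alignment scores
    # emission_log_probs: (T_out, T_in, V)   log p(char | aligned input position), per step
    # targets:            (T_out,)           gold output character indices
    align_log_probs = torch.log_softmax(attn_logits, dim=-1)             # log p(a_j = i)
    tgt = targets.view(-1, 1, 1).expand(-1, emission_log_probs.size(1), 1)
    emit = emission_log_probs.gather(2, tgt).squeeze(-1)                 # log p(y_j | x_i)
    # Sum over alignments at each output position, then over positions.
    return torch.logsumexp(align_log_probs + emit, dim=-1).sum()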