This paper develops a coarse-to-fine framework for single-image super-resolution (SR) reconstruction. The coarse-to-fine approach achieves high-quality SR recovery based on the complementary properties of both example...
To address the traditional Harris corner detection algorithm's tendency to extract false corner points and its high computational complexity when performing corner extraction on an image, an improved Harris corner detection algorithm is proposed. First, a B-spline function replaces the Gaussian window function for smoothing filtering; then corner points are pre-selected to obtain candidate corners. Finally, to improve the adaptability of the algorithm, an auto-adaptive threshold is used during non-maximum suppression. Experimental results show that the algorithm improves detection accuracy and efficiency and exhibits good corner detection performance.
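As a rough illustration of the pipeline this abstract describes (smoothing window, corner response, adaptive threshold, non-maximum suppression), here is a minimal NumPy sketch. It is not the paper's method: a simple box filter stands in for the B-spline window, and all parameter values are illustrative assumptions.

```python
import numpy as np

def harris_corners(img, k=0.04, win=3, thresh_ratio=0.01):
    """Harris sketch: gradients, windowed structure tensor, corner response,
    adaptive threshold (a fraction of the maximum response), and 3x3
    non-maximum suppression.  A box filter replaces the B-spline window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                 # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a, w):                            # separable box smoothing
        kern = np.ones(w) / w
        a = np.apply_along_axis(lambda r: np.convolve(r, kern, 'same'), 0, a)
        return np.apply_along_axis(lambda r: np.convolve(r, kern, 'same'), 1, a)

    Sxx, Syy, Sxy = box(Ixx, win), box(Iyy, win), box(Ixy, win)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2   # Harris response
    T = thresh_ratio * R.max()                # auto-adaptive threshold
    corners = []
    for y in range(1, R.shape[0] - 1):
        for x in range(1, R.shape[1] - 1):
            patch = R[y - 1:y + 2, x - 1:x + 2]
            if R[y, x] > T and R[y, x] == patch.max():   # non-max suppression
                corners.append((y, x))
    return corners

# synthetic test image: one bright square, whose four vertices are corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(harris_corners(img))
```

Because the threshold is tied to the maximum response of the current image rather than fixed globally, the same code adapts across images of different contrast, which is the adaptability the abstract refers to.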
ISBN:
(Print) 9781538621592
This paper proposes a non-iterative algorithm for blind image deblurring. The algorithm can restore degraded images blurred by class-G blurs. It is based on the observation that the spectral amplitudes of most images follow a similar power-law distribution. In accordance with the power-law distribution of natural image spectra, a curve model is proposed to approximate the spectrum of the true image. The OTF (optical transfer function) is estimated by comparing the spectrum of the degraded image with the reconstructed one. The image is then restored using the estimated OTF and Wiener filtering. Experiments show that this algorithm obtains a more accurate OTF and reduces ringing artifacts compared with some existing algorithms. The quality of the restored images is enhanced significantly.
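The core idea, fitting a power-law model to the true-image spectrum, dividing it out of the degraded spectrum to obtain an OTF estimate, and then Wiener-filtering, can be sketched as follows. This is a toy reconstruction of the approach, not the paper's curve model: the exponent `alpha`, the noise-to-signal ratio, and the zero-phase OTF are all illustrative assumptions.

```python
import numpy as np

def power_law_model(shape, alpha=1.0, eps=1e-3):
    """Assumed model of the true-image spectrum: |F(f)| ~ 1 / f^alpha."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    return 1.0 / (f + eps) ** alpha

def estimate_otf(blurred, alpha=1.0):
    """Estimate |OTF| by comparing the degraded spectrum with the model."""
    G = np.abs(np.fft.fft2(blurred))
    H = G / power_law_model(blurred.shape, alpha)
    return np.clip(H / H.max(), 1e-3, 1.0)   # normalize, keep H in (0, 1]

def wiener(blurred, H, nsr=1e-2):
    """Wiener deconvolution with the (assumed zero-phase) estimated OTF."""
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(G * H / (H ** 2 + nsr)))

# demo: blur a random image with a Gaussian OTF, then restore it
rng = np.random.default_rng(0)
img = rng.random((64, 64))
fy = np.fft.fftfreq(64)[:, None]; fx = np.fft.fftfreq(64)[None, :]
H_true = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H_true))
restored = wiener(blurred, estimate_otf(blurred))
print(restored.shape)
```

Because the whole procedure is a handful of FFTs and elementwise operations, it is non-iterative, which matches the algorithm class the abstract claims.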
In this paper, we investigate the space-alternating generalized expectation-maximization (SAGE) algorithm with virtual carriers (VCs) in orthogonal frequency division multiplexing (OFDM) systems. The channel frequency response (CFR) at the VCs cannot be estimated accurately because of the edge effect after the inverse discrete Fourier transform (IDFT). To solve this problem, an improved channel estimation method is introduced that minimizes the CFR errors via an iterative technique. We then apply the SAGE algorithm to estimate the direction of arrival (DOA) and time of arrival (TOA); it offers better performance and higher resolution than subspace-based algorithms. Based on Monte Carlo trials, we test the performance of the improved method in terms of mean absolute error (MAE) at different signal-to-noise ratios (SNRs). Simulation results indicate that the improved method outperforms the conventional DFT-based method at every SNR.
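The iterative refinement of the CFR at the virtual carriers can be illustrated with a Papoulis-Gerchberg-style sketch (an assumption on our part; the paper's exact iteration is not reproduced here): alternate between enforcing a short channel impulse response in the time domain and re-imposing the measured CFR on the used subcarriers, so the VC values are filled in consistently instead of suffering the IDFT edge effect.

```python
import numpy as np

def refine_cfr(H_ls, data_idx, L, n_iter=300):
    """Iteratively estimate the full CFR, including virtual carriers.
    Each pass enforces (a) an L-tap channel model in the time domain and
    (b) the LS estimates on the used subcarriers."""
    N = len(H_ls)
    H = np.zeros(N, complex)
    H[data_idx] = H_ls[data_idx]
    for _ in range(n_iter):
        h = np.fft.ifft(H)
        h[L:] = 0                       # channel has only L taps
        H = np.fft.fft(h)
        H[data_idx] = H_ls[data_idx]    # keep measured carriers fixed
    return H

# toy example: N = 64 subcarriers, 8 virtual carriers at the band edges,
# noiseless LS estimates on the used carriers (an idealization)
N, L = 64, 4
rng = np.random.default_rng(1)
h_true = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
H_true = np.fft.fft(h_true, N)
data_idx = np.arange(4, 60)             # carriers 0-3 and 60-63 are virtual
H_est = refine_cfr(H_true, data_idx, L)
print(np.max(np.abs(H_est - H_true)))   # residual at the VCs after refinement
```

With a correct tap count and noiseless measurements the two constraints intersect at the true CFR, so the alternating projections recover the VC values that a single IDFT-truncate-DFT pass would corrupt.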
The primary function of multimedia systems is to seamlessly transform and display content to users while maintaining the perception of acceptable quality. For images and videos, perceptual quality assessment algorithms play an important role in determining what is acceptable quality and what is unacceptable from a human visual perspective. As modern image quality assessment (IQA) algorithms gain widespread adoption, it is important to achieve a balance between their computational efficiency and their quality prediction accuracy. One way to improve computational performance to meet real-time constraints is to use simplistic models of visual perception, but such an approach has a serious drawback in terms of poor-quality predictions and limited robustness to changing distortions and viewing conditions. In this paper, we investigate the advantages and potential bottlenecks of implementing a best-in-class IQA algorithm, Most Apparent Distortion, on graphics processing units (GPUs). Our results suggest that an understanding of the GPU and CPU architectures, combined with detailed knowledge of the IQA algorithm, can lead to non-trivial speedups without compromising prediction accuracy. A single-GPU and a multi-GPU implementation showed a 24x and a 33x speedup, respectively, over the baseline CPU implementation. A bottleneck analysis revealed the kernels with the highest runtimes, and a microarchitectural analysis illustrated the underlying reasons for the high runtimes of these kernels. Programs written with optimizations such as blocking that map well to CPU memory hierarchies do not map well to the GPU's memory hierarchy. While compute unified device architecture (CUDA) is convenient to use and is powerful in facilitating general purpose GPU (GPGPU) programming, knowledge of how a program interacts with the underlying hardware is essential for understanding performance bottlenecks and resolving them.
ISBN:
(Print) 9781510612402; 9781510612396
Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it for their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools that use machine learning to automate data processing for ALS users. These include new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run either in automated pipelines that operate on data as it is collected or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and are made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.
Plant breeding has improved significantly in recent years; however, phenotyping remains a bottleneck because the process is often expensive, subjective, and laborious. Although commercial phenotyping systems are available, factors such as cost, space, and the need for a specific controlled environment limit their affordability. A low-cost, accurate, and high-throughput phenotyping (HTP) system is highly desirable to plant breeders, physiologists, and agronomists. To address this, an automated system for HTP and accompanying image processing algorithms were developed and tested. The automated platform integrated an aluminum framework (including two stepper motors and control components), three cameras, and a laptop. A control program was developed in LabVIEW to manage the operation of the platform and sensors together. Image processing algorithms were developed in MATLAB for high-throughput analysis of the images acquired by the system, estimating phenotypes/traits of the tested plants. The extracted phenotypes were color, texture, temperature, morphology, and greenness features on a temporal scale. The system was validated using two wheat lines with known heat tolerance. Validation studies revealed that features such as green leaf area and the green normalized difference vegetation index derived from the system showed differences between control and heat-stress treatments, as well as between heat-tolerant and heat-susceptible wheat lines. This study demonstrated the successful development and implementation of a low-cost automated system with custom algorithms for HTP. Improvement of such systems would further help plant breeders, physiologists, and agronomists phenotype crops and accelerate plant breeding.
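Feature extraction of the kind described (green leaf area and green NDVI on segmented plant pixels) can be sketched in a few lines, here in Python rather than the authors' MATLAB. The excess-green segmentation, the threshold, and the band layout are illustrative assumptions, since the abstract does not specify the algorithms.

```python
import numpy as np

def greenness_features(rgb, nir, green_thresh=0.15):
    """Per-image phenotyping features: fraction of green (plant) pixels and
    mean green NDVI over those pixels.  Inputs are float arrays in [0, 1];
    `nir` is the near-infrared band from a multispectral camera."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                   # excess-green index: plant vs soil
    mask = exg > green_thresh
    green_area = mask.mean()              # green leaf area fraction
    gndvi = (nir - g) / (nir + g + 1e-9)  # green NDVI per pixel
    mean_gndvi = gndvi[mask].mean() if mask.any() else 0.0
    return green_area, mean_gndvi

# synthetic scene: left half leafy green plant, right half grey soil
rgb = np.zeros((10, 10, 3))
nir = np.full((10, 10), 0.8)
rgb[:, :5] = [0.1, 0.6, 0.1]
rgb[:, 5:] = [0.4, 0.4, 0.4]
area, gndvi = greenness_features(rgb, nir)
print(round(area, 2), round(gndvi, 2))
```

Run per image over a time series, features like these give the temporal greenness trajectories the study used to separate heat-tolerant from susceptible lines.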
Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. By encoding and processing the image information in quantum-mechanical systems, we here demonstrate the framework of quantum image processing, where a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
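The encoding described above, pixel values in the probability amplitudes and pixel positions in the computational basis states, can be sketched classically as the construction of the state vector. The sketch below is our illustration, not the paper's implementation; it shows why a 2^n-pixel image needs only n qubits.

```python
import numpy as np

def encode_image(img):
    """Amplitude-encode an image: the normalized pixel values become the
    amplitudes of basis states |row, col> (flattened pixel index)."""
    v = img.astype(float).ravel()
    state = v / np.linalg.norm(v)               # unit-norm quantum state
    n_qubits = int(np.ceil(np.log2(len(state))))
    return state, n_qubits

img = np.arange(1, 5).reshape(2, 2)             # 2x2 image -> 2 qubits
state, n = encode_image(img)
print(n, np.round(state ** 2, 3))               # measurement probabilities
```

Measuring the state yields each pixel position with probability proportional to its squared intensity, so the 4-pixel image is carried by just 2 qubits; a megapixel image would need only about 20.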
ISBN:
(Print) 9781509046584
In this paper, the distributed optimization problem is investigated for a second-order multi-agent system. In the proposed algorithm, each agent solves the optimization via local computation and information exchange with its neighbors over the communication network. Moreover, in comparison with existing second-order distributed optimization algorithms, the proposed algorithm is much simpler, since only one coupled information exchange among the agents is required. To achieve the optimization, the distributed algorithm is constructed from the consensus method and the gradient method. The optimal solution of the problem is obtained through the design of a Lyapunov function and with the help of LaSalle's Invariance Principle. A numerical simulation example and a comparison of the proposed algorithm with existing works are presented to illustrate the effectiveness of the theoretical result.
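A minimal numerical sketch of the consensus-plus-gradient idea for second-order (double-integrator) agents follows. The gains and the quadratic local costs are illustrative assumptions; with a finite coupling gain each agent settles near, rather than exactly at, the team optimum, whereas the paper's Lyapunov/LaSalle construction establishes exact convergence.

```python
import numpy as np

# 4 agents on a ring; agent i privately minimizes f_i(x) = (x - a_i)^2 / 2,
# so the team optimum of sum_i f_i(x) is the average of the a_i (here 2.5).
a = np.array([1.0, 2.0, 3.0, 4.0])
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], float)    # ring-graph Laplacian

x = np.zeros(4)                            # positions (decision variables)
v = np.zeros(4)                            # velocities (second-order state)
beta, gamma, dt = 10.0, 2.0, 0.01          # coupling gain, damping, step
for _ in range(4000):
    # local gradient + consensus coupling over neighbors + velocity damping
    acc = -(x - a) - beta * (L @ x) - gamma * v
    x, v = x + dt * v, v + dt * acc
print(np.round(x, 2))                      # all agents close to 2.5
```

Each agent only exchanges its state with its ring neighbors (the `L @ x` term), matching the single coupled information exchange the abstract emphasizes; summing the equilibrium conditions shows the agents' average sits exactly at the optimizer, with individual deviations shrinking as `beta` grows.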