ISBN (digital): 9798350370249
ISBN (print): 9798350370270
Histogram equalization is a method of contrast adjustment in image processing that uses the image's histogram. However, as modern imaging systems become more complex, traditional histogram equalization algorithms are no longer efficient. In response to this problem, researchers have studied several strategies for improving the performance of histogram equalization on digital images. One option is to use parallel processing and multi-threading to distribute the computational burden, thereby speeding up the execution of histogram equalization. Another is to use machine learning algorithms to adapt the histogram equalization parameters to the input image. Furthermore, advanced hardware architectures such as Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), or Application-Specific Integrated Circuits (ASICs) can significantly enhance the speed and efficiency of histogram equalization. These performance optimization techniques have produced encouraging results, significantly improving image processing time and visual perception. Modern imaging systems may benefit tremendously from their use.
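For reference, the baseline operation these optimizations target can be sketched in a few lines. Below is a minimal NumPy illustration of classical histogram equalization on an 8-bit grayscale image; the function name and the toy input are illustrative, not taken from the paper, and none of the parallel, learned, or hardware-accelerated variants discussed above are reproduced here.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Classical histogram equalization for an 8-bit grayscale image.

    Minimal sketch only: constant images and color inputs are not handled.
    """
    # Histogram over the 256 possible intensity values.
    hist = np.bincount(img.ravel(), minlength=256)
    # Cumulative distribution function, stretched to the full [0, 255] range.
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    lut = lut.astype(np.uint8)
    # Map every pixel through the lookup table.
    return lut[img]

# Example: equalize a synthetic low-contrast image.
img = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
```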
C++ is a multi-paradigm language that enables the programmer to set up efficient image processing algorithms easily. This language's strength comes from many aspects. C++ is high-level, so this enables developing powerf...
ISBN (digital): 9781665460262
ISBN (print): 9781665460262
Lane detection based on the visual sensor is of great significance for the environmental perception of the intelligent vehicle. Current mature lane detection algorithms are trained and implemented under good visual conditions. However, low-light environments such as nighttime are much more complex, easily causing misdetections and even perception failures, which are harmful to downstream tasks such as behavior decision and control of the ego-vehicle. To tackle this problem, we propose a new lane detection algorithm that introduces multi-light information into the lane detection task. The proposed algorithm adopts a multi-exposure image processing module, which generates and fuses multi-exposure information from the source image data. By integrating this module, mainstream lane detection models can jointly learn the extraction of lane features as well as the enhancement of low-exposure images, thus improving both the performance and robustness of lane detection at night.
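The paper's exact multi-exposure module is not detailed in the abstract, so the following is only a simplified illustration of the general idea: synthesize several exposures of a dark frame with gamma curves and fuse them with well-exposedness weights. The function name, gamma values, and weighting scheme are assumptions for illustration, not the authors' design.

```python
import numpy as np

def multi_exposure_fuse(img: np.ndarray, gammas=(0.4, 1.0, 2.5)) -> np.ndarray:
    """Generate synthetic exposures via gamma curves and fuse them.

    Simplified stand-in for a multi-exposure module: pixels that are close to
    mid-gray in a given synthetic exposure receive a higher fusion weight.
    """
    img = img.astype(np.float32) / 255.0              # normalize to [0, 1]
    exposures = [np.power(img, g) for g in gammas]    # brightened / darkened copies
    # Weight each exposure by how close its pixels are to mid-gray (0.5).
    weights = [np.exp(-((e - 0.5) ** 2) / 0.08) for e in exposures]
    w_sum = np.sum(weights, axis=0) + 1e-6
    fused = sum(w * e for w, e in zip(weights, exposures)) / w_sum
    return (fused * 255).astype(np.uint8)

# Example: enhance a dark night-time frame before running lane detection on it.
frame = np.random.randint(0, 60, size=(128, 256), dtype=np.uint8)
enhanced = multi_exposure_fuse(frame)
```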
ISBN (digital): 9798331516147
ISBN (print): 9798331516154
Artificial intelligence (AI) has been a key research area since the 1950s, initially focused on using logic and reasoning to create systems that understand language, control robots, and offer expert advice. With the rise of big data and deep learning, AI has advanced in applications like recommendation systems, image recognition, and machine translation, primarily through optimizing loss functions in deep neural networks to improve accuracy and reduce training time. Gradient descent is the core optimization method but faces challenges like slow convergence and local minima. To overcome these, algorithms like Momentum, AdaGrad, RMSProp, Adadelta, Adam, and Nadam have been developed, introducing momentum and adaptive learning rates to accelerate convergence. This paper presents a new optimization algorithm that combines the strengths of Adam and AdaGrad, offering better adaptability to different learning rates.
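As background for the optimizers named above, here is a minimal sketch of a single Adam update step, which combines momentum with a per-parameter adaptive learning rate. The paper's proposed Adam/AdaGrad hybrid is not reproduced; the hyperparameter values are the commonly cited defaults rather than anything from this work.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (m) plus an adaptive per-parameter rate (v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: minimize f(x) = x^2 starting from x = 5.
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(float(theta))  # approaches 0
```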
Delivering a perfect presentation becomes a challenging job because of various factors, such as changing the slides and remembering the correct keys to use while maintaining composure in front of the audience....
ISBN (digital): 9798331541460
ISBN (print): 9798331541477
Compressive sensing (CS) for a full-sized image requires a large sensing matrix, consuming significant storage space and computational resources. Block-based CS addresses this but allocates the same number of samples to each block, ignoring the sparsity differences among blocks. Additionally, the original image's sparsity is unknown in practical systems, so it must be estimated accurately from some initial sampling information. This paper proposes an adaptive-rate block CS model based on the difference between the results of two reconstruction algorithms. The image is first sampled at the sampling end. A high-accuracy algorithm and a low-accuracy algorithm are then used to reconstruct the initially sampled signals. The sparsity of each image block is estimated from the difference between the two initial reconstructions, allowing an appropriate number of samples to be allocated to each block. The main computations of this scheme are concentrated at the reconstruction end, which effectively saves resources at the sampling end. Experimental results show that the proposed scheme can effectively improve reconstruction quality while reducing the sampling rate.
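The abstract does not specify which high- and low-accuracy reconstruction algorithms are used, so the sketch below is only a schematic stand-in for the idea: a shared Gaussian sensing matrix per block, a crude matched-filter reconstruction versus a least-squares one, and the norm of their difference as a per-block proxy that could drive sample allocation. All names and algorithmic choices here are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_cs_initial_sampling(blocks, m0):
    """Initial block-based CS: each flattened block gets the same m0 measurements.

    The gap between a crude reconstruction (matched filter, A^T y) and a better
    one (least squares, pinv(A) y) serves as a per-block proxy that could then
    decide how many extra samples each block receives.
    """
    n = blocks.shape[1]                          # flattened block length B*B
    A = rng.standard_normal((m0, n)) / np.sqrt(m0)
    A_pinv = np.linalg.pinv(A)
    proxies = []
    for x in blocks:
        y = A @ x                                # initial measurements
        x_low = A.T @ y                          # low-accuracy reconstruction
        x_high = A_pinv @ y                      # higher-accuracy reconstruction
        proxies.append(np.linalg.norm(x_high - x_low))
    proxies = np.array(proxies)
    # Give a larger share of the remaining sample budget to blocks with larger gaps.
    return proxies / proxies.sum()

# Example: 16 random 8x8 blocks, 10 initial measurements each.
blocks = rng.standard_normal((16, 64))
allocation_weights = block_cs_initial_sampling(blocks, m0=10)
```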
Graphics processing units (GPUs) have become a basic accelerator both in high-performance nodes and in low-power systems-on-chip (SoCs). They provide massive data parallelism and very high performance per watt. However, their reliability in harsh environments is an important issue to take into account, especially for safety-critical applications. In this article, we evaluate the influence of the parallelization strategy on the reliability of lower-upper (LU) decomposition on a GPU-accelerated SoC under proton irradiation. Specifically, we compare a memory-bound and a compute-bound implementation of the decomposition on a K20A GPU embedded in a Tegra K1 (TK1) SoC. We leverage the GPU and CPU clock frequencies both to highlight the radiation sensitivity of the GPU running the benchmark and to apply both algorithms to problems of the same size under the same radiation dose. Results show that more intensive use of the GPU's resources increases the cross section. We also observed that most radiation-induced errors hang the operating system and even the rebooting process. Finally, we present a preliminary study of error propagation in the LU decomposition algorithms.
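For context, the computation being parallelized is ordinary LU factorization. The following is a minimal sequential Doolittle LU sketch without pivoting, written in NumPy; it is not either of the GPU kernels evaluated in the article, only a reference for what those kernels compute.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L @ U.

    Minimal sequential reference for the computation that memory-bound and
    compute-bound GPU kernels would parallelize.
    """
    A = A.astype(np.float64)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        # Eliminate the entries below the pivot U[k, k].
        factors = U[k + 1:, k] / U[k, k]
        L[k + 1:, k] = factors
        U[k + 1:, k:] -= np.outer(factors, U[k, k:])
    return L, U

# Example: factor a small diagonally dominant matrix and verify the product.
A = np.array([[4.0, 1.0, 2.0], [1.0, 5.0, 1.0], [2.0, 1.0, 6.0]])
L, U = lu_decompose(A)
assert np.allclose(L @ U, A)
```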
Skin diseases have become more prevalent in recent years, posing a significant burden on healthcare systems globally. These conditions can range from noncancerous problems like acne, eczema, and vitiligo to more sever...
ISBN (print): 9781510667877; 9781510667884
A system for determining the distance from the robot to the scene is useful for object tracking, and 3-D reconstructions may be desired for many manufacturing and robotic tasks. While the robot is processing materials, such as welding parts, milling, or drilling, fragments of material fall on the camera installed on the robot, introducing spurious information when building a depth map and creating new lost areas, which leads to incorrect determination of object sizes. The resulting erroneous distance estimates produce wrong sections in the depth map and reduce the accuracy of trajectory planning. We present an approach combining defect detection and depth reconstruction algorithms. The first step, image defect detection, is based on a convolutional auto-encoder (U-Net). The second step is depth map reconstruction using spatial reconstruction based on a geometric model with contour and texture analysis. We apply contour restoration and texture synthesis for image reconstruction. A method is proposed for restoring the boundaries of objects in an image by constructing a composite curve from cubic splines. Our technique quantitatively outperforms state-of-the-art methods in reconstruction accuracy on the RGB-D benchmark for evaluating manufacturing vision systems.
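The composite-spline boundary restoration is only named in the abstract, so the snippet below is a minimal illustration of the idea using SciPy's CubicSpline: parameterize the visible contour points by arc length and interpolate across the lost stretch. The function name and the toy circular contour are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def restore_contour_gap(points, missing_mask, samples_per_gap=20):
    """Bridge a missing stretch of an object contour with cubic splines.

    `points` is an (N, 2) array of contour points ordered along the boundary;
    `missing_mask` marks points lost to a defect (e.g. debris on the lens).
    The visible points are interpolated against their arc-length parameter.
    """
    visible = points[~missing_mask]
    # Parameterize the visible points by cumulative arc length.
    seg = np.diff(visible, axis=0)
    t = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    spline_x = CubicSpline(t, visible[:, 0])
    spline_y = CubicSpline(t, visible[:, 1])
    t_dense = np.linspace(t[0], t[-1], samples_per_gap * len(points))
    return np.stack([spline_x(t_dense), spline_y(t_dense)], axis=1)

# Example: a circular contour with a quarter of its points missing.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)
mask = (theta > np.pi / 2) & (theta < np.pi)   # the lost arc
restored = restore_contour_gap(contour, mask)
```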
ISBN (digital): 9798331527662
ISBN (print): 9798331527679
Machine vision and computer image processing technologies are widely used in the metallurgical industry, especially for recognizing and analyzing defects in glass. High surface flatness and quality in the glass and metal manufacturing sector primarily require automated, high-performance visual detection, and inspection systems and algorithms are constantly improving. This article therefore seeks to comprehensively recognize and analyze glass subsurface defects by integrating image processing technologies and machine vision. Based on the algorithm's properties and the image features captured in the process, the defects identified in glass components are classified primarily by determining their cause and the optical characteristics they exhibit. Across the varied defects detected by the classifiers, a notable improvement in accuracy and a reduction in computational complexity were evident following the use of machine vision and image processing technologies.
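The article does not spell out its classifiers or features here, so the following is only a simplified illustration of a typical defect-inspection front end: threshold dark candidate regions on bright glass, label connected components with SciPy, and compute per-region features that a downstream classifier could use. The threshold, features, and synthetic example are assumptions for illustration, not the article's pipeline.

```python
import numpy as np
from scipy import ndimage

def candidate_defect_features(gray, dark_thresh=60):
    """Segment dark candidate regions in a glass image and describe each one.

    Per-region features (area, mean intensity, elongation proxy) are the kind
    of inputs a classifier could use to separate bubbles, scratches, and
    inclusions.
    """
    mask = gray < dark_thresh                    # dark blobs on bright glass
    labels, n = ndimage.label(mask)              # connected-component labeling
    features = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(labels == region)
        area = ys.size
        mean_val = gray[ys, xs].mean()
        h, w = np.ptp(ys) + 1, np.ptp(xs) + 1
        elongation = max(h, w) / max(min(h, w), 1)   # crude scratch indicator
        features.append((region, area, mean_val, elongation))
    return features

# Example: a synthetic bright glass image with one dark scratch-like defect.
img = np.full((100, 100), 200, dtype=np.uint8)
img[50, 20:80] = 30                              # thin dark line
print(candidate_defect_features(img))
```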