ISBN:
(Print) 9781510684607; 9781510684614
In the modern industrial context, laser processes such as laser cutting and laser welding are predominantly monitored and only partially controlled in specific areas, such as process abort scenarios or axis actuator movements. Industry-driven interfaces such as OPC UA or proprietary bus interfaces have recently made it possible to acquire data from these control units within certain limits. This data can be augmented with highly accurate scientific data sources; in our proposed setup, this is achieved by integrating laser-acoustic sensors along with high-speed cameras operating in the visible and thermal spectra. The variety of available data sources offers significant potential for further processing and analysis via artificial intelligence (AI), contributing to a deeper process understanding and to the further development of enhanced control algorithms for laser material machining processes. A post-mortem annotation with quality characteristics such as dross formation, surface roughness, welding depth, porosity, crater formation, etc. delivers all the prerequisites needed to develop and train AI-based control models. To link all data sources and annotations, a common time management scheme and time standard is required. Its time resolution depends on the fastest cycle time governing a control response, typically executed in the sub-millisecond range: a time scale smaller than the one on which standard AI algorithms typically deliver complex inference results. Our paper presents an approach to close this time gap by introducing a smart control platform capable of capturing and preprocessing data in real time using hardware-accelerated acquisition algorithms and time management (FPGA-MPSoC). The solution was implemented, transferred to a state-of-the-art welding and cutting setup, and successfully tested. A foundation for an AI-controlled laser machining process is thus set.
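As one concrete illustration of the common time base the abstract calls for, the sketch below resamples two differently clocked sensor streams onto a shared sub-millisecond control grid. All names, rates, and the choice of linear interpolation are illustrative assumptions, not the paper's FPGA-MPSoC implementation.

```python
import numpy as np

# Hypothetical illustration: resample two sensor streams that share one
# clock (e.g. acoustic samples and a per-frame camera feature) onto a
# common sub-millisecond control grid. Rates, names, and linear
# interpolation are assumptions, not the paper's implementation.

def align_to_control_grid(t_a, a, t_b, b, dt_control=1e-4):
    """Interpolate both streams onto a grid spaced dt_control seconds."""
    t0, t1 = max(t_a[0], t_b[0]), min(t_a[-1], t_b[-1])
    grid = np.arange(t0, t1, dt_control)
    return grid, np.interp(grid, t_a, a), np.interp(grid, t_b, b)

# e.g. a 200 kHz acoustic stream and a 10 kHz high-speed camera feature
t_ac = np.arange(0.0, 1.0, 1 / 200_000)
t_cam = np.arange(0.0, 1.0, 1 / 10_000)
grid, ac, cam = align_to_control_grid(t_ac, np.sin(2e3 * t_ac),
                                      t_cam, np.cos(3e2 * t_cam))
```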
As a specialty, radiology produces the highest volume of medical images in clinical establishments compared with other commonly employed imaging modalities such as digital pathology, ophthalmic imaging, etc. Archiving this massive quantity of images with large file sizes is a major problem, since the costs associated with storing medical images continue to rise along with the cost of electronic storage devices. One possible solution is to compress them for effective storage. The prime challenge is that each modality is distinctively characterized by the dynamic range and resolution of its signal and by its spatial and statistical distribution; such variations make medical images different from camera-acquired natural scene images. Thus, conventional natural image compression algorithms such as J2K and JPEG often fail to preserve the clinically relevant details present in medical images. We address this challenge by developing a modality-specific compressor and a modality-agnostic generic decompressor, implemented using a deep neural network (DNN) and capable of preserving clinically relevant image information. The architecture of the DNN is obtained through design space exploration (DSE), with the objective of achieving the least computational complexity at the highest compression and a target high quality factor, thereby leading to a low power requirement for computation. The neural compressed bitstream is further compressed using lossless Huffman encoding to obtain variable bit lengths and high-density compression (20x-400x). Experimental validation is performed on X-ray, CT, and MRI images. Through quantitative measurement and clinical validation with a radiologist in the loop, we experimentally demonstrate our approach's performance superiority over traditional methods such as JPEG and J2K operating at matching compression factors.
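The lossless stage described above can be illustrated with a minimal Huffman coder over quantized symbols. The symbol stream here is a stand-in, since producing the actual DNN latents is outside the scope of this snippet.

```python
import heapq
from collections import Counter

# Minimal sketch of the lossless stage: Huffman-coding the quantized
# symbols of a neural compressor's bitstream. The 'latents' below are a
# placeholder for the real quantized DNN outputs.

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies; returns {symbol: bits}."""
    freq = Counter(symbols)
    if len(freq) == 1:                          # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                        # keeps tuple comparison valid
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

latents = [0, 0, 1, 0, 2, 0, 1, 0]              # stand-in quantized symbols
code = huffman_code(latents)
bitstream = "".join(code[s] for s in latents)   # variable-length encoding
```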
Haze and fog are major causes of road accidents. The presence of haze in the air lowers the quality of images captured by visible-spectrum camera sensors, and it hampers numerous computer vision applications because it diminishes scene visibility. Haze removal techniques recover the color and contrast of the scene and are widely used in applications such as outdoor surveillance, object detection, and consumer electronics. Haze removal is commonly performed under a physical degradation model, which requires the solution of an ill-posed inverse problem. Various dehazing algorithms have recently been proposed to alleviate this difficulty and have received a great deal of attention. Dehazing is basically accomplished in four major steps: hazy image acquisition; estimation (of atmospheric light, the transmission map, the scattering phenomenon, and the visibility or haze level); enhancement (improving the visibility level and reducing the haze or noise level); and restoration (restoring the enhanced image and reconstructing it). This four-step process provides a step-by-step approach to the complex solution of the ill-posed inverse problem. Our detailed survey and experimental analysis of different dehazing methods will help readers understand the effectiveness of each individual step of the dehazing process and will facilitate the development of advanced dehazing algorithms. The overall objective of this review is to explore the various methods for efficiently removing haze and the shortcomings of earlier techniques used in the revolutionary era of image processing applications.
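The physical degradation model underlying most of the surveyed methods is I(x) = J(x)t(x) + A(1 - t(x)). As one concrete instance of the estimation and restoration steps, the sketch below inverts this model with the dark channel prior (He et al.); the patch size, omega, and t0 are commonly used defaults rather than values prescribed by this survey, and the input is assumed to be a float RGB image in [0, 1].

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Dark-channel-prior dehazing sketch: estimate atmospheric light A and the
# transmission map t, then invert I = J*t + A*(1 - t) for scene radiance J.
# Assumes I is a float RGB array in [0, 1]; parameters are common defaults.

def dehaze_dark_channel(I, patch=15, omega=0.95, t0=0.1):
    dark = minimum_filter(I.min(axis=2), size=patch)       # dark channel
    # atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate, then radiance recovery with a floor t0
    t = 1 - omega * minimum_filter((I / A).min(axis=2), size=patch)
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0, 1)
```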
In some modern digital positron emission tomography (PET) systems, coincidence pairs are extracted by software coincidence processing (SCP) for reconstruction. SCP is typically implemented on central processing units (CPUs) and accelerated by CPU multithreading. However, the more detection modules a PET system has, the more CPU threads are consumed by acquisition and the fewer remain available for coincidence processing when the total number of threads is fixed. This reduces the processing performance of CPU-based SCP and limits its application. In this article, we propose low-cost GPU-based real-time SCP (GPU-SCP) methods to solve this limited-thread problem. The proposed processing architecture simplifies thread management between acquisition and coincidence processing, decoupling acquisition from coincidence processing; it accelerates coincidence processing with massed GPU threads and finally realizes online coincidence processing in high-sensitivity digital PET systems with 20 basic detection modules (BDMs). To evaluate the performance of the proposed GPU-based SCP approaches, we adapted them to in-house PET systems with different architectures. The speedup experiments show that the proposed sorting-based GPU-SCP achieves up to ~15x average speedup on a GTX 1070 compared with serial CPU algorithms, which is comparable to the parallel CPU algorithm with 30 threads. Moreover, the proposed combination-based GPU-SCP is superior to the sorting-based GPU-SCP for a specific system architecture. Finally, the image quality experiments indicate that the reconstructed images processed by GPU-SCP are almost the same as the ground truth (differences of ~1% in the image domain).
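A CPU-side sketch of the sorting-based idea: merge the per-module event streams, sort globally by timestamp, and pair neighboring singles that fall within a coincidence window. The window width, field names, and the simplified handling of multiple coincidences are assumptions; the GPU variant parallelizes the sort and the window search.

```python
import numpy as np

# Sorting-based coincidence processing sketch: time-sort merged singles and
# pair adjacent events within a coincidence window. Multiples handling is
# simplified (first two events win); window width is an assumed value.

def find_coincidences(timestamps, crystal_ids, window_ps=4000):
    order = np.argsort(timestamps, kind="stable")      # global time sort
    ts, ids = timestamps[order], crystal_ids[order]
    pairs, i = [], 0
    while i < len(ts) - 1:
        if ts[i + 1] - ts[i] <= window_ps:             # within the window
            pairs.append((ids[i], ids[i + 1]))
            i += 2                                     # consume the pair
        else:
            i += 1
    return pairs
```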
This research presents an efficient automatic thresholding technique based on Otsu's method that can be used in edge detection algorithms and then applied as a plug-in for real-time image processing devices. The proposed thresholding technique uses an iterative clustering-based method that targets a reduced number of operations. It is well known that Otsu's method computes the global threshold by splitting the image into two classes, foreground and background, choosing the threshold that minimizes the intra-class variance of the thresholded black and white pixels. In this paper, a faster version of Otsu's method is proposed, exploiting the fact that the only pixels that have to be moved from one class to the other between iterations are those with values lying between the previous two thresholds. This procedure yields the same set of thresholds as the original method, but the redundant computation has been removed, so only a few operations are required per iteration. The proposed thresholding technique has been implemented in software using the C# programming language and in reconfigurable hardware on a Spartan-3E XC3S500E FPGA board using VHDL. The results obtained, presented for different digital images, confirm that the proposed iterative thresholding algorithm and its FPGA architecture can meet the requirements for inclusion in real-time image processing systems.
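The incremental bookkeeping can be sketched with an isodata-style iteration: when the threshold moves, only the histogram bins between the old and new thresholds change class, so only their counts and sums are transferred. This is a minimal illustration of the principle, not the paper's exact update rule that reproduces Otsu's thresholds.

```python
import numpy as np

def iterative_threshold(image, t0=128):
    """Isodata-style iterative threshold with incremental class updates.

    When the threshold moves from t to t_new, only histogram bins in
    between switch classes, so only their counts/sums are transferred.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(np.int64)
    weighted = hist * np.arange(256, dtype=np.int64)
    t = t0
    n_bg, s_bg = hist[:t + 1].sum(), weighted[:t + 1].sum()   # pixels <= t
    n_fg, s_fg = hist[t + 1:].sum(), weighted[t + 1:].sum()   # pixels  > t
    while True:
        m_bg = s_bg / n_bg if n_bg else 0.0
        m_fg = s_fg / n_fg if n_fg else 0.0
        t_new = int((m_bg + m_fg) / 2)                        # new split point
        if t_new == t:
            return t
        lo, hi = (t + 1, t_new) if t_new > t else (t_new + 1, t)
        dn, ds = hist[lo:hi + 1].sum(), weighted[lo:hi + 1].sum()
        sign = 1 if t_new > t else -1          # fg -> bg when threshold grows
        n_bg += sign * dn; s_bg += sign * ds
        n_fg -= sign * dn; s_fg -= sign * ds
        t = t_new

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(iterative_threshold(img))
```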
Robot deburring is an effective method for improving the surface quality of high-voltage copper contacts, and its first step is to acquire burr images. We propose a new mathematical model of burrs and build a real burr image dataset for burr image denoising. To improve the denoising of burr images of high-voltage copper contacts, this study proposes an online burr image denoising algorithm, the block cosparsity overcomplete learning transform algorithm (BCOLTA). The penalty term and the condition number are affected by the burr parameter. Alternating minimization over the clustering and the transform is adopted to achieve a lower computational cost and a better denoising effect. In addition, BCOLTA adapts well to images with inherent noise, especially Gaussian noise. Compared with traditional and deep learning algorithms under both no-reference and full-reference image quality assessment methods, BCOLTA achieves state-of-the-art denoising quality and computational complexity on burr images. This research will play an important role in the intelligent manufacturing field.
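BCOLTA itself alternates clustering with an online transform update, which is not reproduced here; the sketch below is only a generic fixed-DCT patch denoiser illustrating the transform-domain sparsity principle such methods build on. Patch size and threshold are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Generic transform-domain patch denoiser (NOT BCOLTA): overlapping-patch
# DCT hard-thresholding with uniform aggregation of the filtered patches.

def dct_patch_denoise(img, patch=8, thresh=30.0):
    H, W = img.shape
    rows = list(range(0, H - patch + 1, patch // 2))    # half-overlap grid
    cols = list(range(0, W - patch + 1, patch // 2))
    if rows[-1] != H - patch: rows.append(H - patch)    # cover the borders
    if cols[-1] != W - patch: cols.append(W - patch)
    out = np.zeros((H, W)); weight = np.zeros((H, W))
    for i in rows:
        for j in cols:
            block = img[i:i + patch, j:j + patch].astype(np.float64)
            coef = dctn(block, norm="ortho")
            coef[np.abs(coef) < thresh] = 0.0           # hard threshold
            out[i:i + patch, j:j + patch] += idctn(coef, norm="ortho")
            weight[i:i + patch, j:j + patch] += 1.0
    return out / weight                                 # average overlaps
```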
The mass ranges of meteors imaged by electro-optical (EO) cameras and by backscatter radar receivers for the most part do not overlap. Typical EO systems detect meteoroid masses down to 10⁻⁵ kg, or roughly magnitude +2 meteors, when using moderate-field-of-view optics, un-intensified optical components, and meteor entry velocities around 45 km/s. This is near the high end of the mass range of typical meteor radar observations. Having the same-mass meteor measured in different sensor wavelength bands would be a benefit in terms of calibrating mass estimates for both EO and radar. To that end, the University of Western Ontario (UWO) has acquired and deployed a very-low-light imaging system based on electron-multiplying CCD (EMCCD) camera technology. This embeds a very low-noise, per-pixel intensifier chip in a cooled camera with various options for frame rate, region of interest, and binning. The optics and sensor were optimally configured to collect 32 frames per second over a square field of view 14.7 degrees on a side, achieving a single-frame stellar limiting magnitude of m_G = +10.5; the system typically observes meteors of magnitude +6.5. Given this hardware configuration, we successfully met the challenges associated with developing robust image processing algorithms, resulting in a new end-to-end processing pipeline in operation since 2017. A key development in this pipeline has been the first true application of matched filter processing to detect the faintest meteors possible with the EMCCD system while also yielding high-quality automated metric measurements of meteor focal plane positions. With pairs of EMCCD systems deployed at two sites, triangulated high-accuracy orbits are among the many products generated by this system. These measurements will be coupled to observations from the Canadian Meteor Orbit Radar (CMOR), used for meteor plasma characterization, and the Canadian Automated Meteor Observatory (CAMO) high-resolution
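The core matched-filter operation can be sketched as correlation of a frame with a normalized oriented streak template. The actual pipeline searches a bank of velocities and angles across frame stacks, which this single-template version omits; template length and angle here are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

# Matched-filter sketch for faint streaks: correlate a frame with a
# zero-mean, unit-norm line template and inspect the response peak.

def matched_filter_response(frame, length=9, angle_deg=30.0):
    # build a line template at the given angle
    t = np.zeros((length, length))
    c, a = (length - 1) / 2, np.deg2rad(angle_deg)
    for s in np.linspace(-c, c, 4 * length):
        r = int(round(c + s * np.sin(a)))
        q = int(round(c + s * np.cos(a)))
        t[r, q] = 1.0
    t -= t.mean()                       # zero mean: reject flat background
    t /= np.linalg.norm(t)              # unit norm: comparable responses
    # correlation implemented as convolution with the flipped template
    return fftconvolve(frame, t[::-1, ::-1], mode="same")
```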
To improve the segmentation performance and anti-noise robustness of existing weighted kernel intuitionistic fuzzy clustering, we propose a robust kernelized total Bregman divergence-based fuzzy local information clustering algorithm motivated by intuitionistic fuzzy information. In this algorithm, a kernelized total Bregman divergence is extended by a polynomial kernel function, and the corresponding intuitionistic kernelized total Bregman divergence is put forward to measure the difference between intuitionistic fuzzy sets. Then, weighted local information is introduced into the objective function of intuitionistic fuzzy clustering, and the similarity between the current pixel and its neighborhood pixels is constructed to better describe the influence of neighborhood pixels on the current pixel. Finally, the square root of the deviation between the current pixel and the mean value of its neighboring pixels is used to adjust the local spatial information, improving robustness to noise and outliers and further enhancing the anti-noise ability of the algorithm. Experimental results show that the proposed algorithm has better clustering performance and stronger anti-noise robustness than existing state-of-the-art fuzzy clustering-related algorithms.
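For orientation, a minimal fuzzy c-means sketch with a simple local-information step (smoothing each class's membership map over a pixel neighborhood) is shown below. The paper's kernelized total Bregman divergence and intuitionistic memberships are not reproduced; this is only the generic base such algorithms extend.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Generic FCM on a grayscale image with a simple local-information step:
# membership maps are averaged over a window before the prototype update.
# NOT the paper's algorithm; divergence and memberships are the plain ones.

def fcm_local(img, c=3, m=2.0, iters=30, win=3):
    x = img.astype(np.float64).ravel()
    v = np.linspace(x.min(), x.max(), c)                 # init prototypes
    for _ in range(iters):
        d = (x[None, :] - v[:, None]) ** 2 + 1e-12       # squared distances
        u = 1.0 / (d ** (1 / (m - 1)))
        u /= u.sum(axis=0, keepdims=True)                # fuzzy memberships
        # local information: smooth each class's membership map spatially
        u = np.stack([uniform_filter(ui.reshape(img.shape), size=win).ravel()
                      for ui in u])
        u /= u.sum(axis=0, keepdims=True)
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                    # prototype update
    return u.argmax(axis=0).reshape(img.shape)           # hard label map
```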
In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, SkelGAN, in contrast with the state-of-the-art methods that use mathematical algorithms. Several studies have been concerned with skeletonization, but few have utilized deep learning, and no study has considered generative models based on deep neural networks for skeletonizing font characters, which are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters. The proposed skeleton generator proves superior to all well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the dominance of our method over the state-of-the-art supervised image-to-image translation method on the font character skeletonization task.
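SkelGAN follows the conditional-adversarial recipe of pix2pix-style image-to-image translation; a toy training step under that assumption is sketched below, with placeholder networks and random tensors standing in for font/skeleton pairs. The real architecture is far larger.

```python
import torch
import torch.nn as nn

# Toy conditional-GAN training step in the pix2pix style that SkelGAN
# builds on. G maps a font image to a skeleton; D scores (font, skeleton)
# pairs. All modules and tensors are placeholders, not the paper's model.

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))        # patch logits
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g, opt_d = (torch.optim.Adam(M.parameters(), 2e-4) for M in (G, D))

font, skeleton = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
fake = G(font)

# discriminator step: real pair vs. generated pair
d_real = D(torch.cat([font, skeleton], 1))
d_fake = D(torch.cat([font, fake.detach()], 1))
loss_d = (bce(d_real, torch.ones_like(d_real))
          + bce(d_fake, torch.zeros_like(d_fake)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator step: fool D plus L1 reconstruction toward the target skeleton
d_fake = D(torch.cat([font, fake], 1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, skeleton)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```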
Detail enhancement is key to the display of infrared images, and it is very important for infrared detail enhancement algorithms to present a good visual effect. A novel algorithm for detail enhancement of infrared images is proposed in this paper, based on the relativity of a Gaussian-adaptive bilateral filter. The algorithm consists of three steps. First, the input image is divided into a base layer and a detail layer by the relativity of the Gaussian-adaptive bilateral filter. Second, the detail layer is multiplied by the proposed weight coefficient, and the base layer is processed by histogram projection. Third, the processed detail and base layers are combined and output to an 8-bit display domain. Compared with other methods, the new algorithm greatly reduces the running time. Experimental results show that the proposed algorithm effectively improves the contrast of infrared images. (C) 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
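A rough sketch of the three-step pipeline, assuming OpenCV: a plain bilateral filter stands in for the Gaussian-adaptive variant, plain histogram equalization stands in for histogram projection, and the detail weight is a fixed assumption rather than the paper's adaptive coefficient.

```python
import cv2
import numpy as np

# Base/detail split -> weighted detail boost -> 8-bit recombination.
# Plain bilateralFilter and equalizeHist are stand-ins for the paper's
# Gaussian-adaptive bilateral filter and histogram projection.

def enhance_ir(img16, weight=2.5):
    # normalize the raw (e.g. 14-bit) frame to [0, 255] float for filtering
    f = cv2.normalize(img16.astype(np.float32), None, 0, 255,
                      cv2.NORM_MINMAX)
    base = cv2.bilateralFilter(f, 9, 40.0, 9.0)      # step 1: split layers
    detail = f - base
    base8 = cv2.equalizeHist(base.astype(np.uint8))  # step 2: compress base
    out = base8.astype(np.float32) + weight * detail # step 2: boost detail
    return np.clip(out, 0, 255).astype(np.uint8)     # step 3: 8-bit display
```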