ISBN (print): 9781509009428
This paper proposes a novel framework called concatenated image completion via tensor augmentation and completion (ICTAC), which recovers missing entries of color images with high accuracy. Typical images are second- or third-order tensors (2D/3D), depending on whether they are grayscale or color, so tensor completion algorithms are well suited to their recovery. The proposed framework performs image completion by concatenating copies of a single image with missing entries into a third-order tensor, applying a dimensionality augmentation technique to the tensor, using a tensor completion algorithm to recover the missing entries, and finally extracting the recovered image from the tensor. The solution relies on two recently proposed components that exploit the tensor train (TT) rank: a tensor augmentation tool called ket augmentation (KA), which represents a low-order tensor as a higher-order tensor, and tensor completion by parallel matrix factorization via tensor train (TMac-TT), which has been shown to outperform state-of-the-art tensor completion algorithms. Simulation results for color image recovery show the clear advantage of the proposed framework over current state-of-the-art tensor completion algorithms.
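To make the ket augmentation step concrete, here is a minimal numpy sketch of the idea for a grayscale 2^n × 2^n image: pixels are re-addressed so that each mode of the resulting n-th-order tensor indexes one level of nested 2 × 2 blocks. The function name and the exact index ordering are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ket_augment(img):
    # Cast a 2^n x 2^n grayscale image into an n-th-order tensor with
    # mode size 4 by grouping pixels into nested 2x2 blocks
    # (simplified sketch of KA; the index ordering is an assumption)
    n = int(np.log2(img.shape[0]))
    assert img.shape == (2 ** n, 2 ** n)
    # split each spatial axis into n binary digits: (r0..r_{n-1}, c0..c_{n-1})
    t = img.reshape([2] * n + [2] * n)
    # interleave row and column digits so each pair addresses one block level
    order = [i for pair in zip(range(n), range(n, 2 * n)) for i in pair]
    t = t.transpose(order)
    # merge each (row-digit, column-digit) pair into one mode of size 4
    return t.reshape([4] * n)

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image" (n = 2)
tensor = ket_augment(img)           # first mode selects the 2x2 quadrant
```

With this addressing, low-order modes capture coarse block structure and high-order modes capture fine detail, which is what makes the TT rank of the augmented tensor small for natural images.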
Haze is an atmospheric phenomenon that fogs the visibility of the scenes. Removing the haze has been an important issue in imageprocessing technologies. Many image dehazing technologies with evolution algorithms have been proposed to remove the fog in the image. However, these algorithms usually are compute-intensive. In this paper, we propose a parallel hybrid evolution algorithm based on GPU to enhance the computational performance. In traditional evolution algorithms, the calculation of fitness function occupies the most of the computation time. In the proposed method, we implement this part on GPU by using CUDA framework to reduce the computational load. The experiment results show that the proposed method can remove the haze efficiently and successfully.
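The data-parallel pattern the paper offloads to CUDA can be illustrated by evaluating the fitness of the whole population in one vectorized pass. Numpy stands in for the GPU here, and the single-parameter transmission model and variance-based fitness are simplifying assumptions, not the paper's actual objective.

```python
import numpy as np

def fitness_batch(population, hazy, airlight=1.0):
    # Score every individual in one vectorized pass (numpy stands in for
    # the CUDA kernel); candidates are transmission values t in (0, 1]
    t = population[:, None, None]                        # (P, 1, 1)
    recovered = (hazy[None] - airlight) / t + airlight   # (P, H, W) dehazed
    # fitness: contrast (variance) of the recovered scene, higher is better
    return recovered.var(axis=(1, 2))

rng = np.random.default_rng(0)
hazy = 0.6 + 0.1 * rng.random((32, 32))   # synthetic low-contrast patch
pop = rng.uniform(0.2, 1.0, size=64)      # 64 candidate transmissions
scores = fitness_batch(pop, hazy)
best_t = pop[scores.argmax()]             # smaller t stretches contrast more
```

Because every individual's score is independent, this loop-free formulation maps directly onto a one-thread-per-individual CUDA kernel, which is where the paper's speed-up comes from.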
ISBN (print): 9781509018185
In recent years, growth in the quantity, diversity, and capability of Earth Observation (EO) satellites has enabled increases in achievable payload data dimensionality and volume. However, the lack of equivalent advancement in downlink technology has produced an onboard data bottleneck, which must be alleviated if EO satellites are to continue efficiently providing high-quality and increasing quantities of payload data. This research explores the selection and implementation of state-of-the-art multidimensional image compression algorithms and proposes a new onboard data processing architecture to help alleviate the bottleneck and increase the data throughput of the platform. The proposed system is based on a backplane architecture to provide scalability across satellite platform sizes and mission objectives. The heterogeneous nature of the architecture allows the strengths of both Field Programmable Gate Array (FPGA) and Graphics Processing Unit (GPU) hardware to be leveraged for maximum data processing throughput.
ISBN (print): 9781467392204
Face detection is an important step in any face recognition system, localizing and extracting the face region from the rest of the image. The algorithms were built in C/C++ using the OpenCV library. However, face detection requires substantial computing resources because of its heavy real-time workload, and it also needs to run on an embedded platform. The platform hosting the face detection therefore needs to be tuned and enhanced beyond its standard settings, ideally to approach desktop or laptop performance. In this paper, we propose two methods for enhancing the performance of the embedded platform: multiprocessing with the OpenCV libraries and RAM tuning. The results show that one of the boards, Board 1 with a quad-core ARM Cortex-A7 processor, achieves almost the same speed as a laptop with an Intel i3 processor. It can be concluded that an embedded platform is a viable replacement for a conventional computer system.
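One way to realize the multiprocessing idea is to spread detection over incoming frames with a worker pool; OpenCV releases the GIL inside `detectMultiScale`, so even a thread pool can keep all four Cortex-A7 cores busy. The sketch below substitutes a dummy detector for the real cascade so it stays self-contained; the frame format, worker count, and returned box are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def detect(frame):
    # Stand-in for cv2.CascadeClassifier.detectMultiScale so the sketch
    # stays self-contained; returns one dummy (x, y, w, h) box per frame
    return [(0, 0, frame["w"] // 2, frame["h"] // 2)]

def detect_parallel(frames, workers=4):
    # OpenCV releases the GIL inside detectMultiScale, so a thread pool
    # can run detectors on all four Cortex-A7 cores without copying frames
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect, frames))

frames = [{"w": 640, "h": 480} for _ in range(8)]   # mock camera frames
boxes = detect_parallel(frames)                     # one box list per frame
```

A thread pool is preferable to separate processes on a memory-constrained board, since frames are shared rather than pickled between workers.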
ISBN (print): 9781509014033
A signal processing toolbox has been realized to enhance the acoustic signal processing of manufactured electronic systems. The toolbox contains a range of signal processing algorithms that help better differentiate the 3D scanned acoustic signals acquired from manufactured parts. The processing has been used to assess the through-life reliability of systems by non-destructively sampling them during extended reliability tests. The algorithms are based on a variety of novel signal processing methods such as wavelets, time-frequency domain imaging, and dictionaries, with the Fast Fourier Transform (FFT) used for comparison. The paper explores how the various algorithms can be tuned to test samples using the designed toolbox. The toolbox has a graphical user interface (GUI) designed in MATLAB. It controls the processing and display of 3D image data "cubes" acquired from an array of A-scans of a sample obtained by an acoustic microscope. In this case a Sonoscan Gen6 C-SAM acoustic microscope has been used, but if the data format is known the toolbox can be applied to any 3D data cube. The data sizes can be large, so processing times can be on the order of minutes for a 1k × 1k × 1k array. Recently, execution times have been sped up considerably by employing parallel algorithms and multicore processing; some routines now take only seconds, a very significant speed-up.
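The FFT comparison path can be sketched as transforming every A-scan along the depth axis of the data cube at once. The cube layout, sampling rate, and synthetic 50 MHz echo below are assumptions for illustration, not the Sonoscan data format.

```python
import numpy as np

def ascan_spectra(cube, fs):
    # FFT every A-scan along the depth axis of a (rows, cols, samples)
    # cube in one vectorized call; returns bin frequencies and magnitudes
    spec = np.abs(np.fft.rfft(cube, axis=-1))           # magnitude spectra
    freqs = np.fft.rfftfreq(cube.shape[-1], d=1 / fs)   # bin centers in Hz
    return freqs, spec

fs = 1e9                                 # 1 GHz sampling rate (assumed)
t = np.arange(256) / fs
tone = np.sin(2 * np.pi * 50e6 * t)      # synthetic 50 MHz echo
cube = np.tile(tone, (4, 4, 1))          # tiny 4x4 grid of A-scans
freqs, spec = ascan_spectra(cube, fs)
peak = freqs[spec[0, 0].argmax()]        # strongest component near 50 MHz
```

Vectorizing the transform over the whole cube, as here, is the same batching idea behind the toolbox's recent parallel speed-ups.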
ISBN (print): 9781467378086
The Vedic multiplier is based on ancient algorithms (sutras) used in India for multiplication. This work is based on one of these sutras, the "Nikhilam Sutra", which is intended for faster mental calculation. Although fast when implemented in hardware, it consumes more power than conventional multipliers. This paper presents a technique to modify the architecture of the Vedic multiplier using existing methods in order to reduce power and improve image processing applications. The 32 × 32 Vedic multiplier is coded in Verilog HDL and synthesized using Synopsys Design Compiler. Its performance is compared in terms of area, data arrival time, and power with earlier Vedic multiplier architectures. Filtering involves many multiplications, and the time required increases with the number of pixels. This paper therefore proposes an approach to image filtering using Vedic mathematics, which performs multiplication faster than the conventional Booth and array multiplication algorithms, thus reducing the time required for image filtering. The filtering times of the algorithms are then compared using the experimental results.
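The Nikhilam Sutra itself is simple enough to show as a short software sketch of the idea the paper implements in Verilog: instead of multiplying the operands directly, work with their small deficiencies from a nearby power-of-ten base.

```python
def nikhilam(a, b, base=100):
    # Nikhilam Sutra: multiply via deficiencies from a nearby base,
    # turning a wide multiplication into a narrow one plus additions
    # (software sketch of the paper's hardware technique)
    da, db = base - a, base - b     # deficiencies (negative if above base)
    left = a - db                   # cross subtraction; equals b - da
    right = da * db                 # product of the (small) deficiencies
    return left * base + right      # carries and borrows fold in here

# 97 x 96: deficiencies 3 and 4 -> left 97 - 4 = 93, right 3 * 4 = 12 -> 9312
product = nikhilam(97, 96)
```

The hardware benefit is that `right` is a multiplication of narrow deficiency operands rather than full-width inputs, which is what shortens the critical path for numbers close to the base.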
3D Electrical Capacitance Tomography (ECT) poses many challenging computational issues that have been reported by numerous researchers. Image reconstruction using deterministic methods requires executing many basic linear algebra operations. The matrices used in ECT image reconstruction are large, and the best image quality is achieved by algorithms that rely heavily on the finite element method (FEM), which is hard to parallelize or distribute. To address these issues, a new set of algorithms had to be developed.
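As an example of the kind of deterministic linear algebra involved, a common ECT reconstruction scheme is the Landweber iteration, which repeatedly corrects the image by the back-projected measurement residual. The dense random sensitivity matrix below is an illustrative stand-in for the paper's FEM-derived matrices, not its actual pipeline.

```python
import numpy as np

def landweber(S, c, steps=2000, alpha=None):
    # Landweber iteration: x_{k+1} = x_k + alpha * S^T (c - S x_k),
    # a typical deterministic ECT reconstruction built from the basic
    # matrix-vector operations the paper needs to parallelize
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # below 2/||S||^2 converges
    x = np.zeros(S.shape[1])
    for _ in range(steps):
        x += alpha * (S.T @ (c - S @ x))          # back-projected residual
    return x

rng = np.random.default_rng(1)
S = rng.standard_normal((64, 32))   # mock sensitivity matrix (assumed dense)
x_true = rng.random(32)             # "true" permittivity image
c = S @ x_true                      # simulated capacitance measurements
x_rec = landweber(S, c)             # recovers x_true for this noiseless system
```

Each step is dominated by two matrix-vector products, which is exactly the workload that grows with the matrix sizes the abstract mentions and that benefits most from parallelization.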
ISBN (print): 9781467385886
OCR is an active and interesting area spanning text and character recognition and pattern-based image recognition. Today, OCR is used successfully in finance, legal, banking, health care, and home appliances. An OCR pipeline consists of several processing stages, such as image pre-acquisition, classification, post-acquisition, pre-level processing, segmentation, post-level processing, and feature extraction. Many researchers have proposed methodologies and approaches at these different levels, for various languages, using both modern and traditional technologies. This paper presents a detailed study and analysis of character recognition methods and approaches, covering the flow and type of each methodology, the algorithms and supporting technologies used, the background of each proposed method, and its best outcomes. The paper also describes the main objectives and ideas of various OCR algorithms, such as neural network, structural, support vector, statistical, and template matching algorithms, along with how they classify, identify, form rules, and infer in order to recognize characters and symbols.
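Of the surveyed families, template matching is the simplest to show concretely: score an unknown glyph against each stored template by normalized cross-correlation and return the best-matching label. The 3 × 3 stroke templates below are toy assumptions for illustration.

```python
import numpy as np

def classify(glyph, templates):
    # Template-matching OCR: compare the unknown glyph with every stored
    # template by normalized cross-correlation, keep the best label
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(templates, key=lambda label: ncc(glyph, templates[label]))

# toy 3x3 templates for a vertical and a horizontal stroke (assumed glyphs)
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "-": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
}
noisy = templates["I"] + 0.1 * np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
label = classify(noisy, templates)   # a noisy vertical stroke -> "I"
```

Normalizing by mean and magnitude makes the score invariant to brightness and contrast, which is why correlation rather than raw pixel difference is the usual matching criterion.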
ISBN (print): 9781509026616
MeMoS (Medical Model Sketcher), a software package that provides the data required to reconstruct 3D medical models from DICOM and RAW image sets, is presented. The primary objective of the software was to reduce the time needed for the laborious data extraction from medical images (necessary for model reconstruction), without purchasing expensive licenses for existing programs. The obtained data can be used in any CAD software to recreate the spatial object. Generated 3D models of vascular systems can be used in numerical simulations to investigate the physical phenomena occurring in the circulatory system. Additionally, MeMoS can create datasets for texture analysis that can be fed directly to the input of texture analysis software. Several examples of the program's output, along with preliminary validation of the implemented algorithms, are outlined as well.
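The core data-extraction step such a tool automates can be sketched as intensity windowing of a slice followed by collecting the coordinates of the selected pixels for downstream CAD use. The Hounsfield-like values and window limits below are assumptions, not MeMoS internals.

```python
import numpy as np

def extract_region(slice_hu, lo, hi):
    # Window a slice to an intensity range and return the binary mask plus
    # the pixel coordinates inside it -- a minimal sketch of the manual
    # data extraction that model-reconstruction tools automate
    mask = (slice_hu >= lo) & (slice_hu <= hi)
    coords = np.argwhere(mask)   # (row, col) points a CAD tool could import
    return mask, coords

# toy 3x3 slice in Hounsfield-like units: air (-1000), soft tissue, bone (300)
slice_hu = np.array([[-1000,    40,    45],
                     [   30,   300,    50],
                     [-1000, -1000,    35]], float)
mask, coords = extract_region(slice_hu, 20, 60)   # soft-tissue window (assumed)
```

Stacking such per-slice point sets across a DICOM series yields the spatial point cloud from which a CAD package can rebuild the 3D surface.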
ISBN (print): 9781510838819
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
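The key idea of training metric depth from ordinal annotations can be sketched as a pairwise ranking loss over annotated point pairs. The exact form and sign convention below are assumptions in the spirit of the paper's objective, not its verified implementation.

```python
import numpy as np

def pairwise_ranking_loss(z_i, z_j, r):
    # Loss on one annotated pair of predicted depths, with r = +1 when
    # point i is annotated farther than j, -1 for the reverse, and 0 for
    # "same depth" (form and sign convention assumed for illustration)
    d = z_i - z_j
    if r == 0:
        return d ** 2                   # equal-depth pairs: squared difference
    return np.log1p(np.exp(-r * d))     # ordinal pairs: logistic ranking loss

# a prediction consistent with the annotation is penalized less
agree = pairwise_ranking_loss(2.0, 1.0, +1)       # d = +1 matches r = +1
contradict = pairwise_ranking_loss(1.0, 2.0, +1)  # d = -1 contradicts r = +1
```

Because only depth differences enter the loss, each annotation constrains the ordering of a single pair of points, yet summing many such terms over random pairs is enough to shape a dense metric depth map.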