ISBN (digital): 9783662498316
ISBN (print): 9783662498316; 9783662498293
Smoke recognition is one of the research directions in the field of digital image processing, but common algorithms are mostly based on video image sequences. This paper presents a combination of infrared and visible images: the outer contour of the measured object is extracted from the infrared image and compared with the contour of the same region in the visible image. Then, according to the ratio of the number of pixels contained within the object's outer contour in the two bands, the impact of smog on the visible image is determined. Experiments show that the algorithm needs only a single still image in each of the infrared and visible bands to judge whether the environment is smoggy, and it can provide the basis for a fire alarm.
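A minimal sketch of the contour-ratio idea, assuming OpenCV with Otsu thresholding for the infrared segmentation and Canny edges as the detail measure; the file names and decision threshold are hypothetical, and the paper's exact contour comparison may differ:

```python
import cv2
import numpy as np

SMOG_RATIO_THRESHOLD = 0.5  # hypothetical decision threshold

def smog_score(ir_path: str, vis_path: str) -> float:
    """Ratio of visible-band to infrared-band detail pixels inside the
    measured object's outer contour, taken from the infrared image."""
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE)

    # Outer contour of the target from the infrared image (smoke barely
    # affects the infrared band, so this segmentation stays reliable).
    _, ir_bin = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(ir_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outer = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(ir)
    cv2.drawContours(mask, [outer], -1, 255, thickness=cv2.FILLED)

    # Count edge (detail) pixels of each band inside that same region.
    ir_count = np.count_nonzero(cv2.Canny(ir, 50, 150) & mask)
    vis_count = np.count_nonzero(cv2.Canny(vis, 50, 150) & mask)
    return vis_count / ir_count if ir_count else 0.0

# Smog attenuates visible detail far more than infrared detail,
# so a low ratio indicates a smoggy scene.
if smog_score("target_ir.png", "target_vis.png") < SMOG_RATIO_THRESHOLD:
    print("smoggy environment detected; possible basis for a fire alarm")
```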
Background: Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. Methods: The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good-contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. Results: We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
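A rough sketch of the object-based merging idea in NumPy/SciPy, using local Laplacian energy as the focus measure and a global quantile as the foreground test; both are assumed stand-ins, since the abstract does not fix the paper's exact measures:

```python
import numpy as np
from scipy import ndimage

def oedof_composite(stack: np.ndarray, fg_quantile: float = 0.9) -> np.ndarray:
    """stack: (n_slices, H, W) grayscale focal stack. Foreground pixels are
    taken from their sharpest slice; background is copied unchanged from
    the first slice, which is where the speed-up comes from."""
    # Focus measure per slice: local energy of the Laplacian.
    sharpness = np.stack(
        [ndimage.uniform_filter(ndimage.laplace(s.astype(np.float32)) ** 2, size=7)
         for s in stack]
    )
    # Foreground test: pixels whose best sharpness exceeds a global quantile.
    best = sharpness.max(axis=0)
    fg = best > np.quantile(best, fg_quantile)

    # Detail merging restricted to foreground pixels only.
    best_slice = sharpness.argmax(axis=0)
    composite = stack[0].astype(np.float32).copy()
    rows, cols = np.nonzero(fg)
    composite[rows, cols] = stack[best_slice[rows, cols], rows, cols]
    return composite
```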
ISBN (print): 9781510860964
Textual grounding is an important but challenging task for human-computer interaction, robotics and knowledge mining. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep net based systems. In this work, we demonstrate that we can cast the problem of textual grounding into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and does not rely on a successful first stage that hypothesizes bounding box proposals. Beyond that, we demonstrate that the trained parameters of our model can be used as word embeddings which capture spatial-image relationships and provide interpretability. Lastly, at the time of submission, our approach outperformed the current state-of-the-art methods on the Flickr30k Entities and ReferItGame datasets by 3.08% and 7.77% respectively.
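To illustrate why search over all boxes can be made efficient, here is a hedged sketch that scores every axis-aligned box over a per-pixel word-image score map using a summed-area table; the paper's actual energy and search strategy may differ:

```python
import numpy as np

def best_box(word_scores: np.ndarray):
    """Exhaustive search for the axis-aligned box maximizing the sum of
    per-pixel word-image compatibility scores (signed, e.g. log-odds,
    so that larger boxes are not trivially better). A summed-area table
    makes any box's score O(1), so all boxes can be considered instead
    of a fixed proposal set."""
    H, W = word_scores.shape
    # sat[y, x] = sum of word_scores[:y, :x]
    sat = np.zeros((H + 1, W + 1), dtype=np.float64)
    sat[1:, 1:] = word_scores.cumsum(0).cumsum(1)

    best, best_val = None, -np.inf
    for y0 in range(H):
        for y1 in range(y0 + 1, H + 1):
            band = sat[y1] - sat[y0]              # prefix sums of rows y0..y1
            for x0 in range(W):
                vals = band[x0 + 1:] - band[x0]   # scores of boxes starting at x0
                j = int(vals.argmax())
                if vals[j] > best_val:
                    best_val = float(vals[j])
                    best = (x0, y0, x0 + 1 + j, y1)  # (x0, y0, x1, y1), exclusive ends
    return best, best_val
```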
ISBN (print): 9781509018215
The accuracy of stereo matching algorithms is one of the key aspects in autonomous driving nowadays. In the case of large distances, sub-pixel accurate solutions are required, especially for algorithms in discrete settings. It has been previously shown that a strong correlation between the matching algorithm and the sub-pixel interpolation method exists, and there are ways to determine it. Unfortunately, all methodologies presented so far are laborious and time-consuming. We present here a novel sub-pixel disparity correction technique based on applying histogram matching through the use of generated look-up tables (LUTs). Our method is flexible, fast and produces more accurate results than previous solutions in the discrete domain. Although we show the improvements over the Semi-Global Matching algorithm, it can be adapted to other matching algorithms that preserve constant misalignment for any kind of 3D scenario. The proposed method was tested on multiple systems and datasets (synthetic images, traffic scenes, Middlebury images, KITTI images) and we show that we can find LUTs that outperform the accuracy of previous solutions on all these sets. The histogram matching procedure has low computational complexity, and the results indicate a strict dependency of a particular LUT on the underlying stereo matching algorithm and the stereo vision system, but not on the image composition.
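A sketch of the LUT-generation idea under the common assumption that true sub-pixel offsets are uniformly distributed, so histogram-matching the observed fractional disparities against a uniform target yields the correction table; the binning and the target distribution here are assumptions:

```python
import numpy as np

def fractional_lut(disparities: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Build a correction LUT for the fractional parts of sub-pixel
    disparities by histogram matching against a uniform target: the true
    sub-pixel offsets of a rig are uniformly distributed, while
    interpolated ones cluster near integers ("pixel locking")."""
    frac = disparities - np.floor(disparities)
    hist, _ = np.histogram(frac, bins=n_bins, range=(0.0, 1.0))
    # Matching to U(0,1): the corrected fraction is the empirical CDF value.
    return hist.cumsum() / hist.sum()

def apply_lut(disparities: np.ndarray, lut: np.ndarray) -> np.ndarray:
    frac = disparities - np.floor(disparities)
    idx = np.minimum((frac * len(lut)).astype(int), len(lut) - 1)
    return np.floor(disparities) + lut[idx]
```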
ISBN (print): 9781509006120
Global-warming-induced drastic climate changes have increased the frequency of natural disasters such as flooding worldwide. Flooding is a constant threat to humanity, and reliable systems for flood monitoring and analysis need to be developed. Flood hazard assessment needs to take into account physical characteristics such as flood depth, flow velocity and the duration of flooding. This paper provides researchers with a detailed compilation of the methods that can be used for the estimation of flood water depth. A comparative study has been done between the water depth estimation techniques based on image processing and those which do not involve image processing. The comparison is based on various attributes such as implementation methods, advantages, accuracy and cost. Image processing methods are classified based on various algorithms such as character recognition, feature extraction, region of interest (ROI), FIR filtering, etc. Similarly, non-image-processing methods are classified based on the hardware used, such as sensors, level indicators, etc., and other signal-based techniques. This study can be used to identify the best method for flood water depth estimation.
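As a toy illustration of the ROI-based class of image processing methods surveyed here, the following sketch reads a water level from a staff-gauge ROI by locating the sharpest intensity transition; the ROI format and the transition heuristic are assumptions, and real systems add pixel-to-metre calibration:

```python
import cv2
import numpy as np

def waterline_row(frame: np.ndarray, roi: tuple) -> int:
    """Locate the waterline inside a staff-gauge ROI as the image row where
    mean intensity changes most sharply (dry gauge vs. submerged part).
    roi = (x, y, w, h) in pixels; frame is a BGR image."""
    x, y, w, h = roi
    gauge = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    row_means = gauge.mean(axis=1)
    # The largest jump between consecutive row means marks the waterline.
    return y + int(np.abs(np.diff(row_means)).argmax())
```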
ISBN (print): 9781509009220
The aim of this paper is to develop a system that recognizes Brahmi, Grantha and Vattezhuthu characters from palm-leaf manuscripts of ancient historical Tamil documents, analyzes the text and machine-translates it into the present-day Tamil digital text format. Though many researchers have implemented various algorithms and techniques for character recognition in different languages, conversion of ancient characters still poses a big challenge: image recognition technology has reached near-perfection when it comes to scanning English and other language text, but optical character recognition (OCR) software capable of digitizing printed Tamil text with high levels of accuracy is still elusive. Only a few people are familiar with the ancient characters and make attempts to convert them into written documents manually. The proposed system overcomes this situation by converting ancient historical documents from inscriptions and palm-leaf manuscripts into the Tamil digital text format, using Tamil Unicode. Our algorithm comprises different stages: i) image preprocessing, ii) feature extraction, iii) character recognition and iv) digital text conversion. In the first phase, the conversion accuracy of our algorithm for the Brahmi script is 91.57%, using a neural network and the image zoning method. The second phase covers the Vattezhuthu character set; its conversion accuracy is 89.75%.
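For readers unfamiliar with the image zoning method mentioned above, here is a minimal sketch of zoning feature extraction from a binarized character image; the zone count is an assumption, as the abstract does not state it:

```python
import numpy as np

def zoning_features(glyph: np.ndarray, zones: int = 4) -> np.ndarray:
    """Image zoning: split a binarized (0/1) character image into a
    zones x zones grid and use each cell's ink density as one feature.
    The resulting vector feeds the character-recognition classifier."""
    h, w = glyph.shape
    feats = []
    for i in range(zones):
        for j in range(zones):
            cell = glyph[i * h // zones:(i + 1) * h // zones,
                         j * w // zones:(j + 1) * w // zones]
            feats.append(cell.mean())  # fraction of ink pixels in the cell
    return np.asarray(feats, dtype=np.float32)
```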
ISBN (print): 9781509010257
The digital era has produced a large volume of images, which has created many challenges in the computer science field to store, retrieve and manage images efficiently and effectively. Many techniques and algorithms have been proposed by different researchers to implement Content-Based Image Retrieval (CBIR) systems. This paper discusses the performance of different CBIR systems implemented using the combined features of colour, texture and shape based on the wavelet transform. The choice of feature extraction technique used in image retrieval determines the performance of a CBIR system. In this paper, the performance of three CBIR systems is evaluated: wavelet decomposition using thresholding, wavelet decomposition using morphology operators, and wavelet decomposition using Local Binary Patterns (LBP). The performance of these methods is also compared with the existing methods SIMPLIcity and FIRM. Average precision is used to compare the performance of the implemented systems. Results indicate that CBIR systems using wavelet decomposition give better results than SIMPLIcity and FIRM; also, wavelet decomposition with Local Binary Patterns (LBP) exhibits better retrieval efficiency than wavelet decomposition using thresholding or morphological operators. These CBIR systems have been tested on the benchmark Wang image database. Precision-versus-recall graphs for each system show the performance of the respective systems.
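A hedged sketch of the wavelet-plus-LBP feature scheme, assuming PyWavelets and scikit-image; the wavelet family, LBP radius and histogram binning are assumptions not fixed by the abstract:

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def wavelet_lbp_descriptor(gray: np.ndarray) -> np.ndarray:
    """LBP histograms computed on the four sub-bands of a single-level
    wavelet decomposition; images are then ranked by distance between
    descriptors and scored with average precision."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), "db1")
    feats = []
    for band in (cA, cH, cV, cD):
        # 'uniform' LBP with P=8 yields integer codes in [0, 9].
        lbp = local_binary_pattern(band, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```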
ISBN (print): 9781509028962
For optical remote sensing images, an effective method to reduce or eliminate the impact of clouds is important. With big data input and real-time processing demands, efficient parallelization strategies are essential for high performance computing on multi-core systems. This paper proposes an efficient high performance parallel computing framework for cloud filtering and smoothing. A comparison and benchmarking of two parallel algorithms for cloud filtering that incorporate spatial smoothing solved by two-dimensional dynamic programming is presented. The experiments were carried out on an NVIDIA GPU accelerator with evaluations of approximation, parallelism and performance. The test results show significant performance improvements with high accuracy compared with a sequential CPU implementation, and the framework can be applied to other multi-core systems.
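As an illustration of why such smoothing parallelizes well, here is a sketch of a one-dimensional dynamic-programming pass over the rows of a label-cost volume; the cost model and penalty are assumed stand-ins for the paper's two-dimensional formulation:

```python
import numpy as np

def dp_row_pass(cost: np.ndarray, lam: float) -> np.ndarray:
    """One forward DP pass over a (labels, width) cost slice:
    V[l, x] = cost[l, x] + min over l' of ( V[l', x-1] + lam * |l - l'| ).
    Each image row gets its own independent pass, which is what lets the
    smoothing map naturally onto one GPU thread (or block) per row."""
    n_labels, width = cost.shape
    labels = np.arange(n_labels)
    penalty = lam * np.abs(labels[:, None] - labels[None, :])  # (l, l')
    V = cost.astype(np.float64).copy()
    for x in range(1, width):
        V[:, x] += (V[:, x - 1][None, :] + penalty).min(axis=1)
    return V

# Running row passes and then column passes approximates the
# two-dimensional smoothing referred to in the abstract.
```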
ISBN (print): 9788993215120
In recent years, the development of computer-aided diagnosis (CAD) systems for the purpose of reducing false positives in visual screening and improving the accuracy of lesion detection has advanced. Lung cancer is the leading cause of cancer death in the world. Among lung lesions, GGO (Ground Glass Opacity), which appears early in precancerous lesions and carcinoma in situ, shows only a pale density, raising concern that it may go undetected during screening. In this paper, we propose an automatic extraction method for GGO candidate regions from chest CT images. Our proposed image processing algorithm consists of four main steps: 1) segmentation of the volume of interest from the chest CT image and removal of blood vessel and bronchus regions based on a 3D line filter; 2) initial detection of GGO regions based on density and gradient, which selects the initial GGO candidate regions; 3) identification of GGO candidate regions based on a DCNN (Deep Convolutional Neural Network); and 4) calculation of statistical features to reduce false-positive (FP) shadows with a rule-based method, followed by identification of the final GGO candidate regions with an SVM (Support Vector Machine). Our proposed method was evaluated on 31 cases from the LIDC (Lung Image Database Consortium) database, and a final identification performance of TP: 93.02% and FP: 128.52/case was obtained.
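A hedged sketch of step 2, the density-and-gradient candidate detection, on a CT volume in Hounsfield units; the density window, gradient cutoff and minimum component size are assumptions, not the paper's derived thresholds:

```python
import numpy as np
from scipy import ndimage

def initial_ggo_candidates(ct: np.ndarray, min_voxels: int = 50) -> np.ndarray:
    """Flag voxels whose density lies in a faint, ground-glass-like range
    and whose gradient magnitude is low (GGO is pale and diffuse), then
    keep connected components of plausible size. ct is a 3D volume in
    Hounsfield units."""
    GGO_HU_RANGE = (-750.0, -350.0)   # assumed ground-glass density window
    MAX_GRADIENT = 60.0               # assumed gradient-magnitude cutoff

    density_ok = (ct > GGO_HU_RANGE[0]) & (ct < GGO_HU_RANGE[1])
    grad = np.asarray(np.gradient(ct.astype(np.float32)))
    grad_mag = np.sqrt((grad ** 2).sum(axis=0))
    candidates = density_ok & (grad_mag < MAX_GRADIENT)

    # Remove tiny speckle: keep only components with enough voxels.
    labels, n = ndimage.label(candidates)
    sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
    return np.isin(labels, np.nonzero(sizes >= min_voxels)[0] + 1)
```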
ISBN (print): 9781538616062
Lossless compression is an important topic for ultraspectral sounder data, which includes thousands of spectral channels and needs to be stored or transmitted in an efficient form. In this paper, a recursive least squares (RLS) based prediction method is proposed for the lossless compression of ultraspectral data. Experiments are performed on 10 granule maps acquired by NASA's Atmospheric Infrared Sounder (AIRS) system. The experimental results show that the proposed method provides compression ratios comparable to state-of-the-art methods, i.e., ADQPCA and FSQPCA. Given its compression performance and lower complexity, the proposed method can be effectively implemented in embedded systems and is well suited for onboard processing on satellites.
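A minimal sketch of RLS prediction of one spectral channel from its k predecessors; the residuals are what an entropy coder would store for lossless reconstruction. The predictor order, forgetting factor and initialization are assumptions:

```python
import numpy as np

def rls_channel_residuals(prev: np.ndarray, target: np.ndarray,
                          lam: float = 0.999, delta: float = 1e3):
    """Predict one spectral channel (target, shape (n,)) from the k
    previous channels (prev, shape (k, n)) with recursive least squares.
    The integer residuals are entropy-coded; the decoder runs the same
    recursion to reconstruct losslessly."""
    k = prev.shape[0]
    w = np.zeros(k)                      # predictor weights
    P = np.eye(k) * delta                # inverse correlation matrix estimate
    residuals = np.empty(target.shape[0])
    for i in range(target.shape[0]):
        x = prev[:, i].astype(np.float64)
        y_hat = w @ x                    # prediction for this sample
        e = float(target[i]) - y_hat
        residuals[i] = float(target[i]) - np.round(y_hat)  # integer residual
        g = P @ x / (lam + x @ P @ x)    # RLS gain vector
        w += g * e
        P = (P - np.outer(g, x @ P)) / lam
    return residuals, w
```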