To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process that data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved in order to further the field. Current temporal bone image acquisition and image processing established in the literature, as well as in-house methodological development, are considered. We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included "temporal bone simulation, surgical simulation, temporal bone." Articles that directly addressed data acquisition, processing/segmentation, and enhancement were chosen and reviewed, with emphasis given to computer-based systems. We present the results of this review in relation to our approach. High-resolution CT imaging (voxel resolution), along with unique image-processing and rendering algorithms and structure-specific enhancement, is needed for high-level training and assessment using temporal bone surgical simulators. Higher-resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment.
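As a rough illustration of the kind of processing step such a pipeline starts from, the minimal sketch below applies a simple Hounsfield-unit threshold to a CT volume to obtain a bone mask. The threshold value, the synthetic volume, and the function name segment_bone are assumptions for illustration, not the authors' protocol.

```python
# Minimal sketch: threshold-based bone segmentation of a CT volume (assumed setup).
import numpy as np

def segment_bone(ct_volume_hu, threshold_hu=300):
    """Return a boolean mask of voxels above a bone-like Hounsfield threshold."""
    return ct_volume_hu > threshold_hu

# Synthetic stand-in for a high-resolution CT volume (values in Hounsfield units).
volume = np.random.normal(loc=0.0, scale=400.0, size=(64, 64, 64))
mask = segment_bone(volume)
print(f"bone-like voxels: {mask.sum()} of {mask.size}")
```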
We propose an optimization-based power-constrained red-green-blue (RGB)-to-red-green-blue-white (RGBW) conversion algorithm for emissive RGBW displays. We measure the perceived color distortion using a color difference model in a perceptually uniform color space, and compute the power consumption for displaying an RGBW pixel on an emissive display. The central contribution of this paper is the formulation of an optimization problem that minimizes the color distortion subject to a constraint on the power consumption. Subsequently, we solve the optimization problem efficiently to convert an image in real time. Furthermore, based on the properties of the human visual system, we extend the proposed algorithm to an image-dependent conversion that can preserve spatial detail in an input image. The simulation results show that the proposed algorithm yields significantly less color distortion than the conventional methods, while providing a graceful tradeoff with the amount of power consumed. Specifically, it is shown that the power consumption can be reduced by up to 20%, while providing about 50% less color distortion than the conventional algorithms. In addition, a subjective evaluation on a real RGBW display is performed, which reveals the merits of the proposed image-dependent conversion for improving the perceptual quality over state-of-the-art techniques.
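For orientation, the sketch below shows a baseline white-extraction RGB-to-RGBW conversion and a linear per-channel power model. The paper's actual method is a constrained optimization in a perceptually uniform color space; the min-based white extraction and the power weights here are simplifying assumptions for illustration only.

```python
# Sketch: naive RGB-to-RGBW conversion plus an assumed linear power model.
import numpy as np

def rgb_to_rgbw(rgb):
    """rgb: float array in [0, 1], shape (..., 3). Returns shape (..., 4)."""
    w = rgb.min(axis=-1, keepdims=True)            # extract the common (white) component
    return np.concatenate([rgb - w, w], axis=-1)

def power(rgbw, weights=(1.0, 1.0, 1.0, 0.8)):
    """Assumed per-subpixel power weights for an emissive RGBW panel."""
    return float(np.tensordot(rgbw, np.asarray(weights), axes=([-1], [0])).sum())

pixel = np.array([[0.9, 0.6, 0.4]])
rgbw = rgb_to_rgbw(pixel)
print(rgbw, power(rgbw))
```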
Background: Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. Methods: The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. Results: We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
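The sketch below illustrates the core idea of object-based focus stacking: choose the sharpest slice per pixel, but only within a foreground mask. The gradient-magnitude sharpness measure, the fixed mask, and the function names are assumptions; the OEDoF modules themselves (color conversion, object region identification, good-contrast pixel identification, detail merging) are more elaborate.

```python
# Sketch: per-pixel best-focus selection restricted to foreground pixels.
import numpy as np

def sharpness(gray):
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy)                                 # simple focus measure

def object_based_stack(stack, foreground):
    """stack: (N, H, W) grayscale focal slices; foreground: (H, W) boolean mask."""
    scores = np.stack([sharpness(s) for s in stack])        # (N, H, W)
    best = np.argmax(scores, axis=0)                        # index of sharpest slice
    composite = np.take_along_axis(stack, best[None], axis=0)[0]
    # Outside the foreground, keep the first slice: no merging work is spent there.
    return np.where(foreground, composite, stack[0])

stack = np.random.rand(5, 64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(object_based_stack(stack, mask).shape)
```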
ISBN (print): 9781450347860
The use of reconfigurable computer vision architectures for image processing tasks is an important and challenging application in real-time systems with limited resources. It is an emerging field as new computing architectures are developed, new algorithms are proposed, and users define new emerging applications in surveillance. In this paper, a computer vision architecture capable of reconfiguring the processing chain of computer vision algorithms is summarised. The processing chain consists of multiple computer vision tasks, which can be distributed over various computing units. One key characteristic of the designed architecture is graceful degradation, which prevents the system from failure. This characteristic is achieved by distributing computer vision tasks to other nodes and parametrizing each task depending on the specified quality of service. Experiments using an object detector applied to a public dataset are presented.
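A toy sketch of the graceful-degradation idea follows: when a node drops out, its vision tasks are redistributed and re-parametrized (here, a reduced frame rate) to fit the remaining capacity. The node names, capacities, and the single QoS parameter are illustrative assumptions, not the paper's scheduler.

```python
# Toy sketch: redistribute tasks from failed nodes and degrade their QoS (frame rate).
def redistribute(tasks, nodes, failed):
    """tasks: {name: fps}, nodes: {node: capacity_fps}, failed: set of node names."""
    alive = {n: c for n, c in nodes.items() if n not in failed}
    assignment, load = {}, {n: 0.0 for n in alive}
    for name, fps in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = min(load, key=load.get)                      # least-loaded surviving node
        scale = max(0.0, min(1.0, (alive[node] - load[node]) / fps))
        assignment[name] = (node, fps * scale)              # possibly degraded frame rate
        load[node] += fps * scale
    return assignment

tasks = {"detector": 25.0, "tracker": 15.0, "logger": 5.0}
nodes = {"edge-a": 30.0, "edge-b": 30.0}
print(redistribute(tasks, nodes, failed={"edge-b"}))
```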
Image compression is an essential technique for saving time and storage space for the gigantic amount of data generated by images. This paper introduces an adaptive source-mapping scheme that greatly improves bit-level lossless grayscale image compression. In the proposed mapping scheme, the frequency of occurrence of each symbol in the original image is computed. These symbols are then sorted in descending order of frequency. Based on this order, each symbol is replaced by an 8-bit weighted fixed-length code. This replacement generates an equivalent binary source with longer runs of successive identical symbols (0s or 1s). Different experiments using Lempel-Ziv lossless image compression algorithms have been conducted on the generated binary source. Results show that the newly proposed mapping scheme achieves dramatic improvements in compression ratio.
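The sketch below shows the frequency-ranked remapping idea: symbols are re-coded so the most frequent gray level receives the smallest code before an LZ-family coder is applied. zlib (DEFLATE, an LZ77 derivative) stands in for the Lempel-Ziv coders used in the paper, and the specific 8-bit code assignment here is a simplifying assumption.

```python
# Sketch: remap gray levels by descending frequency, then apply an LZ-based coder.
import zlib
from collections import Counter

import numpy as np

def remap_by_frequency(image):
    """image: 2-D uint8 array. Returns the remapped array and the symbol table."""
    freq = Counter(image.ravel().tolist())
    order = [sym for sym, _ in freq.most_common()]          # descending frequency
    table = {sym: rank for rank, sym in enumerate(order)}   # symbol -> 8-bit code
    lut = np.zeros(256, dtype=np.uint8)
    for sym, rank in table.items():
        lut[sym] = rank
    return lut[image], table

img = np.random.randint(0, 8, size=(64, 64), dtype=np.uint8) * 32
mapped, _ = remap_by_frequency(img)
print(len(zlib.compress(img.tobytes())), len(zlib.compress(mapped.tobytes())))
```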
ISBN (print): 9781509033492
Shadows are formed by the interaction of light with objects, and their effect is crucial in satellite image processing. Roads, buildings, trees, etc., are detected for various applications, but the interference of shadows causes mismatching of these objects. Several algorithms have been developed to detect and reconstruct shadow regions. This paper presents a shadow detection technique based on Niblack segmentation, which yields better shadow regions than Otsu's thresholding method and Sauvola-based thresholding. Reconstruction of the shadow region is done by a Bayesian classifier, which generates a training vector and reconstructs the non-shadow region from the shadow region. The posterior probability is determined to reconstruct the non-shadow image intensity level. This algorithm has been successfully tested with VHSR images.
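For reference, the sketch below implements Niblack's local threshold, T(x, y) = mean(x, y) + k * std(x, y) over a sliding window, which is the segmentation step this shadow detection builds on. The window size and k value are assumptions, and the Bayesian reconstruction stage described in the abstract is not shown.

```python
# Sketch: Niblack local thresholding used as a dark-region (shadow) detector.
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(gray, window=25, k=-0.2):
    """gray: 2-D float image in [0, 1]. Returns a boolean 'darker than local' mask."""
    mean = uniform_filter(gray, size=window)
    sq_mean = uniform_filter(gray * gray, size=window)
    std = np.sqrt(np.clip(sq_mean - mean * mean, 0.0, None))
    threshold = mean + k * std
    return gray < threshold                  # candidate shadow pixels

image = np.random.rand(128, 128)
shadow_mask = niblack_threshold(image)
print(shadow_mask.mean())
```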
The main objective of this paper is to provide a comprehensive study on Sparse Representation based feature extraction techniques in the image classification domain. Sparse Representation (SR) plays a vital role in bo...
Integral imaging-based cryptographic algorithms provide a new way to design secure and robust image encryption systems. In this paper, we introduce a performance-enhanced image encryption scheme based on depth-conversion integral imaging and hybrid cellular automata (CA), aiming to meet the requirements of secure image transmission. First, the input image is decomposed into an elemental image array (EIA) using the depth-converted integral imaging technique. The obtained elemental images are then encrypted by utilizing the CA model and a chaotic sequence. The conventional computational integral imaging reconstruction (CIIR) technique is a pixel-superposition technique in which the resolution of the reconstructed image is dramatically degraded by the large magnification factor in the superposition process as the pickup distance increases. In the proposed reconstruction process, a pixel mapping technique is introduced to solve this problem. A novel property of the proposed scheme is its depth-conversion property, which reconstructs an elemental image originally recorded at a long distance from the pinhole array as one recorded near the pinhole array, and consequently reduces the magnification factor. The results of numerical simulations demonstrate the effectiveness and security of the proposed scheme. (C) 2015 Elsevier B.V. All rights reserved.
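As a small illustration of the chaotic-sequence component used alongside the cellular automaton, the sketch below XORs an elemental image array with a keystream generated by a logistic map. The map parameters, seed, and the omission of the CA and integral imaging stages are simplifying assumptions.

```python
# Sketch: logistic-map keystream XORed with an elemental image array (EIA).
import numpy as np

def logistic_keystream(length, x0=0.3141592, r=3.99):
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)                # logistic map iteration
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(eia, key_seed=0.3141592):
    flat = eia.ravel()
    ks = logistic_keystream(flat.size, x0=key_seed)
    return np.bitwise_xor(flat, ks).reshape(eia.shape)

eia = np.random.randint(0, 256, size=(4, 4, 32, 32), dtype=np.uint8)
cipher = xor_encrypt(eia)
assert np.array_equal(xor_encrypt(cipher), eia)   # XOR with the same keystream decrypts
```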
This article presents a novel segmentation algorithm that allows the automatic segmentation of masonry blocks from a 3D point cloud acquired with LiDAR technology, for both stationary and mobile devices. The point cloud segmentation algorithm is based on a 2.5D approach that creates images from the intensity attribute of LiDAR systems. Image processing algorithms based on an improved marker-controlled watershed were successfully used to produce the automatic segmentation of the point cloud in 3D space, isolating each individual stone block. Finally, morphologic analysis has been carried out in two case studies. The morphologic analysis provides information about the assemblage of masonry pieces, which is valuable for the structural evaluation of masonry buildings.
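The sketch below shows a standard marker-controlled watershed on a 2.5D raster, the kind of image processing step applied here to rasterized LiDAR intensity. The synthetic two-block image and the distance-transform marker selection are assumptions; the paper's improvements to marker selection are not reproduced.

```python
# Sketch: marker-controlled watershed separating two synthetic "blocks".
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic stand-in for a 2.5D raster containing two masonry blocks.
image = np.zeros((80, 80), dtype=bool)
image[10:40, 10:40] = True
image[45:75, 45:75] = True

distance = ndi.distance_transform_edt(image)
peaks = peak_local_max(distance, labels=image, min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=image)   # one label per block
print(np.unique(labels))
```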
ISBN (print): 9781467399616
Ultra-low-delay video transmission is becoming increasingly important. Video-based applications with ultra-low delay requirements range from teleoperation scenarios, such as controlling drones or telesurgery, to autonomous control of dynamic processes using computer vision algorithms applied to real-time video. To evaluate the performance of the video transmission chain in such systems, it is important to be able to precisely measure the glass-to-glass (G2G) delay of the transmitted video. In this paper, we present a low-complexity system that takes a series of pairwise independent measurements of G2G delay and derives performance metrics, such as mean delay or minimum delay, from the data. The precision is in the sub-millisecond range, mainly limited by the sampling rate of the measurement system. In our implementation, we achieve a G2G measurement precision of 0.5 milliseconds with a sampling rate of 2 kHz.
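A minimal sketch of the metric-derivation step follows: given a series of independent G2G delay measurements, it quantizes them to the 0.5 ms resolution implied by 2 kHz sampling and reports summary statistics. The sample values and function name are illustrative assumptions; the measurement hardware itself is not modeled.

```python
# Sketch: summary metrics over pairwise independent G2G delay measurements.
import statistics

SAMPLE_PERIOD_MS = 1000.0 / 2000.0           # 2 kHz sampling -> 0.5 ms resolution

def summarize(delays_ms):
    """delays_ms: list of independent G2G delay measurements in milliseconds."""
    q = [round(d / SAMPLE_PERIOD_MS) * SAMPLE_PERIOD_MS for d in delays_ms]
    return {
        "mean_ms": statistics.mean(q),
        "min_ms": min(q),
        "max_ms": max(q),
        "stdev_ms": statistics.stdev(q) if len(q) > 1 else 0.0,
    }

print(summarize([42.1, 39.8, 45.3, 41.0, 40.2]))
```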