Color images may be encoded by applying a gray-scale image compression technique to each of the three color planes. Such an approach, however, does not take advantage of the correlation existing between the color planes. In this paper, a new segmentation-based lossless compression method is proposed for color images. The method exploits the correlation among the three color planes by treating each pixel as a vector of three components, performing region growing and difference operations on these vectors, and applying a color coordinate transformation. The method outperformed the Joint Photographic Experts Group (JPEG) standard by an average of 3.40 bits/pixel on a database comprising four natural color images of scenery, four images of burn wounds, and four fractal images, and it outperformed the Joint Bi-Level Image Experts Group (JBIG) standard by an average of 3.01 bits/pixel. When applied to a database of 20 burn wound images, the 24 bits/pixel images were efficiently compressed to 4.79 bits/pixel, requiring 4.16 bits/pixel less than JPEG and 5.41 bits/pixel less than JBIG. (C) 2001 SPIE and IS&T.
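The benefit of coding inter-plane differences rather than independent planes can be sketched with a simple reversible transform (a plain G, R-G, B-G decorrelation; this is only an illustration of the idea, not the paper's color coordinate transformation or its region-growing step):

```python
import numpy as np

def interplane_differences(rgb):
    """Code the green plane plus the R-G and B-G difference planes.
    When the planes are correlated, the differences cluster near zero,
    lowering the entropy seen by a lossless entropy coder."""
    r, g, b = (rgb[..., i].astype(np.int16) for i in range(3))
    return g, r - g, b - g

def reconstruct(g, dr, db):
    """Exact inverse: the transform is lossless."""
    return np.stack([dr + g, g, db + g], axis=-1).astype(np.uint8)

# Highly correlated toy planes: the difference planes are tiny constants.
rgb = np.dstack([np.full((4, 4), 120, np.uint8),
                 np.full((4, 4), 118, np.uint8),
                 np.full((4, 4), 122, np.uint8)])
g, dr, db = interplane_differences(rgb)
```

Because the inverse is exact, any gain in coding the near-zero difference planes comes at no cost in fidelity, which is the essential requirement for lossless compression.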
ISBN:
(Print) 0819438537
Due to practical considerations, electrical fields and other soft field sensing techniques are convenient to use in process tomography applications. However, most reconstruction algorithms were first developed for hard sensing fields, which approximate soft fields only in situations of low electrical contrast. A less restrictive reconstruction method can be developed by refining the qualitative images of a direct imaging probe, but the boundary measurements must be sensitive to changes in the distribution of the electrical properties of the flow. This paper addresses this issue and presents a sensitivity analysis of different excitation strategies and their applicability in such a reconstruction algorithm. Results confirm that classical excitation strategies suffer from a major lack of sensitivity and that new ones must be developed, possibly based on the optimization of excitation profiles or on multisensing techniques. (C) 2001 SPIE and IS&T.
ISBN:
(Print) 0819442046
GLINT (Geo Light Imaging National Testbed) is a program to image geosynchronous satellites using Fourier telescopy.(1,2,3) The design of the GLINT system requires knowledge of the reflectance properties of the satellites in certain specific wavelength ranges. Calibrated measurements of satellite brightness due to solar illumination can be made with a telescope. This report details such measurements and the data processing necessary to yield curves of normalized satellite return versus phase angle in given wavelength ranges. These measurements can be used to check the accuracy of satellite reflectivity models.
We present algorithms for predicting the quality of colour reproductions of natural scenes. The algorithms are based on a three-point concept of image quality: (1) images are carriers of visual information about the outside world and the objects located in it, (2) images are used by the visual and cognitive systems to reconstruct and interpret the outside world, and (3) the quality of an image is the degree of success with which this image can be used by the visual and cognitive systems.
Programmable media processors have been emerging to meet the continuously increasing computational demand of complex digital media applications, such as HDTV and MPEG-4, at an affordable cost. These media processors provide the flexibility to implement various image computing algorithms along with high performance, unlike the hardwired approach, which provides high performance for a particular algorithm but lacks flexibility. However, to achieve high performance on these media processors, a careful and sometimes innovative design of algorithms is essential. In addition, programming techniques, e.g., software pipelining and loop unrolling, are needed to speed up the computations, while the data flow can be optimized using a programmable DMA controller. In this paper, we describe an algorithm for two-dimensional convolution that can be implemented efficiently on many media processors. Implemented on a new media processor called the MAP1000, it takes 7.9 ms to convolve a 512x512 image with a 7x7 kernel, which is much faster than previously reported software-based convolution and is comparable with hardwired implementations. High performance in two-dimensional convolution and other algorithms on the MAP1000 clearly demonstrates the feasibility of software-based solutions in demanding imaging and video applications. (C) 2000 SPIE and IS&T. [S1017-9909(00)00203-8].
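The operation being accelerated can be stated as a short scalar reference (a generic "valid-region" 2-D convolution; this sketch is not the MAP1000 implementation, which would unroll these loops and software-pipeline the multiply-accumulates):

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct 2-D convolution over the 'valid' region.
    Accumulates one shifted, weighted copy of the image per kernel tap."""
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    kh, kw = k.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * image[i:i + oh, j:j + ow]
    return out

# A 7x7 box filter on a flat image leaves the valid region unchanged.
image = np.ones((9, 9))
smoothed = convolve2d(image, np.full((7, 7), 1.0 / 49))
```

Each output pixel requires kh*kw multiply-accumulates (49 for a 7x7 kernel), which is why loop unrolling and DMA-managed data flow matter at 512x512 frame sizes.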
ISBN:
(Print) 0819436771
This paper concerns the possibilities of imaging the sea bottom and determining the altitude of each imaged point. New side scan sonars are able to image the sea bottom with a high definition and to evaluate the relief with the same definition, derived from an interferometric multisensor system. The drawback concerns the precision of the numerical altitude model. One way to improve measurement precision is to merge all the information issued from the multisensor system; this increases the signal-to-noise ratio (SNR) and the robustness of the method. The aim of this paper is to demonstrate the ability to derive benefit from all information issued from the three-array side scan sonar by merging: a. the three phase signals obtained at the output of the sensors, b. the same set of data after the application of different processing methods, and c. the a priori contextual relief information. The key idea of the proposed fusion technique is to exploit the strengths and weaknesses of each data element in the fusion process so that the global SNR is improved, as well as the robustness to hostile noisy environments.
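The SNR gain from merging several phase signals can be illustrated with a toy numerical sketch (synthetic data and simple averaging only; the paper's fusion technique additionally weights each data element and exploits contextual relief information):

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = np.linspace(0.0, np.pi, 1000)

# Three sensors observing the same phase profile with independent
# additive noise (purely synthetic values for illustration).
sensors = [true_phase + rng.normal(0.0, 0.3, true_phase.size)
           for _ in range(3)]

# Merging by averaging: with N independent measurements the noise
# standard deviation drops by about sqrt(N), i.e. sqrt(3) here.
fused = np.mean(sensors, axis=0)
err_single = np.std(sensors[0] - true_phase)
err_fused = np.std(fused - true_phase)
```

This is the baseline effect that any fusion scheme should at least match; weighting by per-sensor reliability can do better when the noise levels differ.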
We describe a modification of the mixture proportion estimation algorithm based on the granulometric mixing theorem. The modified algorithm is applied to the problem of counting different types of white blood cells in bone marrow images. In principle, the algorithm can be used to count the proportion of cells in each class without explicitly segmenting and classifying them. The direct application of the original algorithm does not converge well for more than two classes. The modified algorithm uses prior statistics to initially segment the mixed pattern spectrum and then applies the one-primitive estimation algorithm to each initial component. Applying the algorithm to one class at a time results in better convergence. The counts produced by the modified algorithm on six classes of cells, namely myeloblast, promyelocyte, myelocyte, metamyelocyte, band, and polymorphonuclear (PMN) cells, are very close to the human experts' numbers; the deviation of the algorithm counts is similar to the deviation of counts produced by human experts. The important technical contributions are that the modified algorithm uses prior statistics for each shape class in place of prior knowledge of the total number of objects in an image, and that it allows for more than one primitive from each class. (C) 2000 SPIE and IS&T. [S1017-9909(00)00602-4].
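The linear-mixing idea behind the granulometric mixing theorem (a mixed pattern spectrum is approximately a proportion-weighted sum of the per-class spectra) can be sketched as a least-squares problem; the spectra below are made-up vectors, and this toy solver is not the paper's estimation algorithm:

```python
import numpy as np

def estimate_proportions(class_spectra, mixed_spectrum):
    """Recover mixture proportions p from mixed ~= S @ p, where the
    columns of S are per-class pattern spectra; clip to nonnegative
    values and renormalize so the proportions sum to one."""
    S = np.column_stack(class_spectra)
    p, *_ = np.linalg.lstsq(S, mixed_spectrum, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Two hypothetical class spectra and an exact 70/30 mixture of them.
s1 = np.array([1.0, 3.0, 1.0, 0.0])
s2 = np.array([0.0, 1.0, 3.0, 1.0])
mixed = 0.7 * s1 + 0.3 * s2
p = estimate_proportions([s1, s2], mixed)
```

With noisy spectra or many overlapping classes, this direct solve is exactly where convergence degrades, which motivates the paper's strategy of handling one class at a time with prior statistics.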
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons to receive, display, and manage documents received from Ariel systems. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be considered for inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
This paper is an attempt to develop a coherent framework for understanding, modeling, and computing color categories. The main assumption is that the structure of color category systems originates from the statistical structure of the perceived color environment. This environment can be modeled by the color statistics of natural images in some perceptual and approximately uniform color space (e.g., the CIELUV color space). The process of color categorization can then be modeled as the grouping of these color statistics by clustering algorithms (e.g., K-means). The proposed computational model enables prediction of the location, order, and number of color categories. The model is examined on the basis of K-means clustering analysis of the statistics of 630 natural images in the CIELUV color space. In general, the predictions are consistent with the data of Berlin and Kay, and of Boynton and Olson.
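The categorization step can be sketched with a plain K-means (Lloyd's algorithm) clustering of synthetic "color statistics"; the three-dimensional points and the deterministic initialization below are illustrative assumptions, not the paper's data or settings:

```python
import numpy as np

def kmeans(points, init_centers, iters=20):
    """Lloyd's algorithm: alternate nearest-center assignment and
    centroid update. Each final center plays the role of a color
    category focal point."""
    centers = np.asarray(init_centers, dtype=float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two tight clouds in a CIELUV-like (L*, u*, v*) space stand in for
# the color statistics of natural images.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([60.0, 20.0, 10.0], 2.0, (100, 3)),
                  rng.normal([30.0, -15.0, 5.0], 2.0, (100, 3))])
centers, labels = kmeans(data, init_centers=data[[0, 100]])
```

In the model, the number of clusters K corresponds to the number of color categories, and the centroid positions predict the category foci and their order of emergence.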
Our laboratory uses image perception studies to optimize the acquisition and processing of image sequences from x-ray fluoroscopy and interventional MRI (iMRI), both of which are used to guide complex minimally invasive treatments of cancer and vascular disease. Fluoroscopy consists of high frame rate, quantum-limited image sequences. Since it accounts for over half of the diagnostic population x-ray dose, we attempt to reduce dose by optimizing image acquisition and filtering. We quantify image quality using human detection experiments and modeling. Human spatio-temporal processing greatly affects the results. For example, spatial noise reduction filtering is significantly more effective on image sequences than on single image frames, where it gives relatively little improvement due to the deleterious effect of spatial noise correlation. At CWRU, we use iMRI to guide a radio-frequency probe used for the thermal ablation of cancer. Improving the speed and accuracy of insertion to the target will reduce patient risk and discomfort. We are investigating keyhole imaging, whereby one updates only a portion of the Fourier domain at each time step, producing a fast, approximate image sequence. To optimize the very large number of techniques and parameters, we use a perceptual difference model that quantifies the degrading effects introduced by fast MR imaging, including the blurring of interventional devices. Preliminary studies show that a perpendicular frequency encoding direction provides superior image quality in the region of interest compared to other keyhole stripe orientations. Together, these two applications illustrate that image perception studies can impact the design of medical imaging systems.
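The keyhole idea (re-acquiring only a central stripe of the Fourier domain each frame) can be sketched with NumPy FFTs; the image, stripe width, and orientation below are arbitrary illustrative choices, not acquisition parameters from the study:

```python
import numpy as np

def keyhole_update(ref_kspace, new_kspace, keep):
    """Reuse the reference k-space but replace the `keep` central rows
    (the low spatial frequencies) with freshly acquired lines, then
    reconstruct by inverse FFT. Inputs are fftshift-ed k-spaces."""
    k = ref_kspace.copy()
    n = k.shape[0]
    lo = n // 2 - keep // 2
    k[lo:lo + keep, :] = new_kspace[lo:lo + keep, :]
    return np.fft.ifft2(np.fft.ifftshift(k))

# Reference frame and a later frame in which the object has moved.
ref_img = np.zeros((32, 32))
ref_img[8:24, 8:24] = 1.0
new_img = np.roll(ref_img, 2, axis=0)
ref_k = np.fft.fftshift(np.fft.fft2(ref_img))
new_k = np.fft.fftshift(np.fft.fft2(new_img))

# Updating 8 of 32 lines already moves the reconstruction toward the
# new frame; updating all 32 lines recovers it exactly.
approx = keyhole_update(ref_k, new_k, keep=8)
exact = keyhole_update(ref_k, new_k, keep=32)
```

By Parseval's theorem, the residual error of the keyhole reconstruction is exactly the energy of the frame-to-frame change in the lines that were not re-acquired, which is why the choice of stripe orientation relative to the frequency encoding direction matters.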