ISBN: 9780819466143 (print)
In this paper, a vision chip for contrast-enhanced imaging based on the structure of the biological retina is introduced. The key advantage of this structure is its high signal-processing speed. In a conventional active pixel sensor (APS), the charge accumulation time limits the operating speed. To enhance the speed, a logarithmic APS was applied to the vision chip. By applying a MOS-type photodetector to the logarithmic APS, we could achieve sufficient output swing for the vision chip under natural illumination conditions. In addition, a CMOS buffer circuit, a common-drain amplifier, is shared by the raw and smoothed images through additional switches. By using the switch-selective resistive network, the total number of MOSFETs per unit pixel and the fixed-pattern noise were reduced. A vision chip with a 160 x 120 pixel array was fabricated using a 0.35 μm double-poly four-metal CMOS technology, and its operation was experimentally investigated.
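A logarithmic APS reads out a voltage proportional to the logarithm of the photocurrent rather than integrating charge, which is why it avoids the accumulation-time bottleneck. The following is a minimal numerical sketch of that response; the subthreshold parameters are illustrative assumptions, not values from the chip described above.

```python
import numpy as np

# Minimal sketch of the logarithmic pixel response: the output voltage tracks
# the logarithm of the photocurrent instead of integrating charge over time,
# so many decades of illumination map onto a modest voltage swing with no
# accumulation delay. Parameter values are illustrative assumptions.
V_T = 0.026    # thermal voltage at room temperature [V]
N_SLOPE = 1.3  # subthreshold slope factor (assumed)
I_0 = 1e-15    # dark/saturation current constant [A] (assumed)

def log_pixel_output(i_photo):
    """Instantaneous logarithmic pixel response (no integration time)."""
    return N_SLOPE * V_T * np.log(i_photo / I_0)

# Photocurrents spanning six decades of illumination.
for i_ph in np.logspace(-13, -7, 7):
    print(f"I_ph = {i_ph:.0e} A  ->  V_out = {log_pixel_output(i_ph)*1e3:.0f} mV")
```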
Many algorithms have been developed to recognize regions, edges, color, and objects in images and videos. For applications like surveillance or object-based video coding, it is important to segment the foreground obje...
ISBN: 9780819466938 (print)
A multi-sensor detection and fusion technology is described in this paper. The system consists of inputs from three sensors: infrared, Doppler motion, and stereo video. This choice of sensors is designed to give high reliability: the infrared and Doppler sensors provide detection ability at night, while the stereo video provides depth and range information. The combination of these sensors can provide a high probability of detection and a very low false alarm rate. The technique consists of three processing parts, one for each sensor's data, and a fusion module, which makes the final decision based on the inputs from the three parts. The signal processing and detection algorithms process the inputs from each sensor and provide specific information to the fusion module. The fusion module is based on Bayesian belief propagation. It takes the processed inputs from all the sensor modules and provides a final decision on the presence or absence of objects, as well as its reliability, based on an iterative belief propagation algorithm operating on decision graphs. A prototype system was built using this technique to study the feasibility of intrusion detection for NASA's launch danger zone protection. The system verified the potential of the proposed algorithms and demonstrated the feasibility of a high probability of detection and low false alarm rates compared to many existing techniques.
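As a rough illustration of the kind of Bayesian decision fusion described above, the sketch below fuses three binary sensor reports with a naive-Bayes update; the paper's fusion module uses iterative belief propagation on decision graphs, and all prior and likelihood values here are assumed.

```python
# Simplified sketch of Bayesian decision fusion for three sensors
# (infrared, Doppler motion, stereo video). This collapses the paper's
# iterative belief propagation on decision graphs to a single naive-Bayes
# update under conditional independence. All probabilities are illustrative.

PRIOR_INTRUDER = 0.01  # prior probability that an intruder is present (assumed)

# P(sensor fires | intruder), P(sensor fires | no intruder) -- assumed values
SENSOR_MODELS = {
    "infrared":     (0.90, 0.05),
    "doppler":      (0.85, 0.10),
    "stereo_video": (0.80, 0.02),
}

def fuse(detections):
    """detections: dict sensor -> bool. Returns posterior P(intruder | data)."""
    p_intruder, p_clear = PRIOR_INTRUDER, 1.0 - PRIOR_INTRUDER
    for sensor, fired in detections.items():
        p_d_given_i, p_d_given_c = SENSOR_MODELS[sensor]
        if fired:
            p_intruder *= p_d_given_i
            p_clear    *= p_d_given_c
        else:
            p_intruder *= 1.0 - p_d_given_i
            p_clear    *= 1.0 - p_d_given_c
    return p_intruder / (p_intruder + p_clear)

# Example: infrared and Doppler fire at night, stereo video does not.
print(fuse({"infrared": True, "doppler": True, "stereo_video": False}))
```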
ISBN: 9780819468444 (print)
One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes have an impact on the number of quantized transform coefficients (QTCs) and motion vectors (MVs) and, consequently, on the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff among rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macroblock (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed CABAC decoding complexity model provides good estimates for various bit streams. Practical applications of this complexity model are also discussed.
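As an illustration of how such a model could drive encoder decisions, the sketch below assumes a simple linear complexity model in the QTC and MV counts and folds it into a rate-distortion-complexity cost; the linear form, coefficients, and weights are placeholders, not the model fitted in the paper.

```python
# Sketch of complexity-aware mode decision. The CABAC decoding complexity of a
# macroblock is modeled here as a linear function of its quantized transform
# coefficients (QTCs) and motion vectors (MVs); the form and coefficients are
# illustrative assumptions, not the paper's fitted model.

ALPHA_QTC = 1.0   # cost per coded QTC (assumed units)
BETA_MV   = 4.0   # cost per coded MV (assumed units)
GAMMA_MB  = 10.0  # fixed per-macroblock overhead (assumed)

def cabac_decode_complexity(num_qtc, num_mv):
    return ALPHA_QTC * num_qtc + BETA_MV * num_mv + GAMMA_MB

def best_mode(candidates, lambda_rate=0.5, lambda_complexity=0.1):
    """Pick the mode minimizing J = D + lambda_R * R + lambda_C * C."""
    def cost(m):
        c = cabac_decode_complexity(m["num_qtc"], m["num_mv"])
        return m["distortion"] + lambda_rate * m["rate"] + lambda_complexity * c
    return min(candidates, key=cost)

modes = [
    {"name": "inter_16x16", "distortion": 120.0, "rate":  80.0, "num_qtc": 30, "num_mv": 1},
    {"name": "inter_8x8",   "distortion":  95.0, "rate": 140.0, "num_qtc": 55, "num_mv": 4},
    {"name": "intra_4x4",   "distortion":  90.0, "rate": 200.0, "num_qtc": 90, "num_mv": 0},
]
print(best_mode(modes)["name"])
```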
ISBN: 0819467014 (print)
The proceedings contain 24 papers. The topics discussed include: recent advances in multiview distributed video coding; super-resolution-based enhancement for real-time ultra-low-bit-rate video coding; signal compression via logic transforms; a prompt information retrieval system on handheld devices; detecting and isolating malicious nodes in wireless ad hoc networks; switching theory-based steganographic system for JPEG images; new quantization matrices for JPEG steganography; a mesh-based robust digital watermarking technique against geometric attacks; multi-level signature based biometric authentication using watermarking; the problems of using ROC curve as the sole criterion in positive biometrics identification; secure access control to hidden data by biometric features; and a robust digital watermarking scheme by use of integral imaging technique.
ISBN: 9781424408290 (print)
Physics studies in fusion devices require statistical analyses of a large number of discharges. Given the complexity of the plasma and the non-linear interactions between the relevant parameters, connecting a physical phenomenon with the signal patterns that it generates can be quite demanding. Up to now, data retrieval has typically been accomplished by means of signal name and shot number, and the search for the temporal segment to analyze has been carried out manually. Such manual database searches must be replaced by intelligent techniques that look for data in an automated way. Structural pattern recognition techniques have proven to be very efficient methods to index and retrieve data in the JET and TJ-II databases. Waveforms and images can be accessed through several structural pattern recognition applications.
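A hedged sketch of the general idea behind structural pattern retrieval of waveforms: reduce each signal to a string of slope primitives and locate a query pattern by substring matching. This is a generic illustration only, not the technique deployed on the JET or TJ-II databases.

```python
import numpy as np

# Generic sketch of structural (symbolic) waveform indexing: each signal is
# reduced to a string of slope primitives ("u" up, "d" down, "f" flat) over
# fixed-length segments, and a query waveform is located by substring
# matching. Illustration only; not the JET/TJ-II implementation.

def to_symbols(signal, seg_len=16, flat_tol=1e-4):
    symbols = []
    for start in range(0, len(signal) - seg_len, seg_len):
        slope = np.polyfit(np.arange(seg_len), signal[start:start + seg_len], 1)[0]
        symbols.append("f" if abs(slope) < flat_tol else ("u" if slope > 0 else "d"))
    return "".join(symbols)

t = np.linspace(0, 1, 1024)
waveform = np.concatenate([t[:512], 0.5 - 0.3 * t[:512]])  # ramp up, then slow decay
query = waveform[400:700]                                   # the transition region

index = to_symbols(waveform)
pos = index.find(to_symbols(query))
print("query pattern found at segment", pos)
```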
The paper presents new tight-frame dyadic limit functions with a dense time-frequency grid. The underlying lowpass and band-pass filters possess linear phase. The filter bank additionally has two highpass filters which are identical up to a one-sample shift. This leads to wavelets that are approximately shift-invariant. The filters in this paper are FIR and have vanishing moments.
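The benefit of two highpass filters that differ only by a one-sample shift can be seen from a small numerical experiment: their decimated outputs are the even and odd samples of the same full-rate highpass signal, so together they sample the subband on a dense grid. The Haar-like filter below is a stand-in for the paper's linear-phase FIR filters.

```python
import numpy as np

# Two highpass filters identical up to a one-sample shift produce decimated
# subbands that interleave into the full-rate highpass output: the dense
# time grid behind the approximate shift-invariance.
h1 = np.array([1.0, -1.0]) / np.sqrt(2)   # highpass prototype (stand-in filter)
h2 = np.concatenate(([0.0], h1))          # same filter, delayed by one sample

x = np.random.default_rng(0).standard_normal(64)

full_rate = np.convolve(x, h1)            # undecimated highpass output
y1 = np.convolve(x, h1)[::2]              # branch 1: filter, downsample by 2
y2 = np.convolve(x, h2)[::2]              # branch 2: shifted filter, downsample by 2

interleaved = np.empty(len(full_rate))
interleaved[0::2] = y1                    # even-indexed samples of full_rate
interleaved[1::2] = y2[1:]                # odd-indexed samples of full_rate
print(np.allclose(interleaved, full_rate))  # True
```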
ISBN: 9780819466891 (print)
Accurate geo-location of imagery produced from airborne imaging sensors is a prerequisite for precision targeting and navigation. However, the geo-location metadata often has significant errors which can degrade the performance of applications using the imagery. When reference imagery is available, image registration can be performed as part of a bundle-adjustment procedure to reduce metadata errors. Knowledge of the metadata error statistics can be used to set the size of the registration transform hypothesis search space. In setting the search space size, a compromise is often made between computational expediency and search space coverage. It therefore becomes necessary to detect cases in which the true registration solution falls outside of the initial search space. To this end, we develop a registration verification metric, for use in a multisensor image registration algorithm, which measures the verity of the registration solution. The verification metric value is used in a hypothesis testing problem to make a decision regarding the suitability of the search space size. Based on the hypothesis test outcome, we close the loop on the verification metric in an iterative algorithm. We expand the search space as necessary, and re-execute the registration algorithm using the expanded search space. We first provide an overview of the registration algorithm, and then describe the verification metric. We generate numerical results of the verification metric hypothesis testing problem in the form of Receiver Operating Characteristic (ROC) curves illustrating the accuracy of the approach. We also discuss normalization of the metric across scene content.
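A toy version of the closed-loop behavior described above is sketched below: exhaustive translation-only registration inside a bounded search space, a normalized-correlation score standing in for the verification metric, and expansion of the space whenever the hypothesis test rejects the solution. The metric, threshold, and growth factor are illustrative, not those of the paper.

```python
import numpy as np

# Toy closed-loop registration: search integer shifts within a bounded range,
# score the solution with normalized cross-correlation (a stand-in for the
# paper's verification metric), and expand the search space when the score
# fails the threshold test. Threshold and growth factor are assumed.

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_in_space(sensed, reference, half_width):
    """Exhaustively search integer shifts in [-half_width, half_width]^2."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-half_width, half_width + 1):
        for dx in range(-half_width, half_width + 1):
            shifted = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
            score = ncc(sensed, shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score

def register_with_verification(sensed, reference, half_width=2,
                               threshold=0.9, growth=2, max_iters=4):
    for _ in range(max_iters):
        shift, score = register_in_space(sensed, reference, half_width)
        if score >= threshold:     # hypothesis test passes: accept the solution
            return shift, score
        half_width *= growth       # true shift may lie outside: expand the space
    return shift, score

rng = np.random.default_rng(1)
reference = rng.standard_normal((64, 64))
sensed = np.roll(np.roll(reference, 7, axis=0), -5, axis=1)  # true shift (7, -5)
print(register_with_verification(sensed, reference))
```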
ISBN: 9780819465566 (print)
Current signal post processing in spectrally encoded frequency domain (FD) optical coherence microscopy (OCM) and optical coherence tomography (OCT) uses Fourier transforms in combination with non-uniform resampling strategies to map the k-space data acquired by the spectrometer to spatial domain signals which are necessary for tomogram generation. We propose to use a filter bank (FB) framework for the remapping process. With our new approach, the spectrometer is modeled as a critically sampled analysis FB, whose outputs are quantized subband signals that constitute the k-space spectroscopic data. The optimal procedure to map this data to the spatial domain is via a suitably designed synthesis FB which has low complexity. FB theory additionally states that 1) it is possible to find a synthesis FB such that the overall system has the perfect reconstruction (PR) property; 2) any processing on critically sampled subband signals (as done in current schemes) results in aliasing artifacts. These perspectives are evaluated both theoretically and experimentally. We determine the analysis FB corresponding to our FD-OCM system by using a tunable laser and show that for our grating-based spectrometer - employing a CCD line camera - the non-uniform resampling together with FFT indeed causes aliasing terms and depth-dependent signal attenuation. Furthermore, we compute a finite impulse response based synthesis FB and assess the desired PR property by means of layered samples. The resulting images exhibit higher resolution and improved SNR compared to the common FFT-based approach. The potential of the proposed FB approach opens a new perspective also for other spectroscopic applications.
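For reference, the conventional chain that the filter-bank framework aims to replace can be sketched in a few lines: interpolate the spectrometer samples, which are non-uniform in k, onto a uniform k grid and FFT to the depth domain. The simulation below uses a single reflector and linear interpolation as a stand-in for the system's resampling strategy; all parameters are illustrative.

```python
import numpy as np

# Conventional FD-OCT processing chain: the spectrometer samples the
# interferogram uniformly in wavelength (non-uniformly in k = 2*pi/lambda),
# so the data are interpolated onto a uniform k grid and then Fourier
# transformed to obtain the depth profile. All numbers are illustrative.

n_pixels = 2048
lam = np.linspace(800e-9, 880e-9, n_pixels)   # wavelengths seen by the CCD line
k = 2 * np.pi / lam                           # non-uniform wavenumber samples

depth = 150e-6                                # single reflector at 150 um
fringe = np.cos(2 * k * depth)                # idealized spectral interferogram

# Step 1: resample onto a uniform k grid (linear interpolation as a stand-in
# for the system's resampling strategy; np.interp needs ascending abscissae).
k_uniform = np.linspace(k.min(), k.max(), n_pixels)
fringe_uniform = np.interp(k_uniform, k[::-1], fringe[::-1])

# Step 2: window and FFT to the spatial (depth) domain.
a_scan = np.abs(np.fft.rfft(fringe_uniform * np.hanning(n_pixels)))
dz = np.pi / (k_uniform.max() - k_uniform.min())   # depth sampling interval
peak_depth = (np.argmax(a_scan[1:]) + 1) * dz * 1e6  # skip the DC bin
print(f"reflector peak at ~{peak_depth:.1f} um (expected ~150 um)")
```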
ISBN: 9780819469519 (print)
The correlator is the key signal-processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near-real-time correlator for spacecraft tracking adopts pipelining and thread-level parallelism and runs on SMP (Symmetric Multi-Processor) servers. Another high-speed prototype correlator, using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm, is realized on a small Beowulf cluster platform. Both correlators feature a flexible structure, scalability, and the ability to correlate data from 10 stations.
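A minimal sketch of the FX-style kernel at the heart of such a software correlator is shown below: each time segment from two stations is Fourier transformed, the cross-power spectra are accumulated, and segments are farmed out to worker processes. This loosely mirrors the thread/MPI parallelism described above but is a generic illustration, not the paper's correlators.

```python
import numpy as np
from multiprocessing import Pool

# Minimal FX-style correlation kernel: for each time segment, FFT the data
# from two stations and accumulate the cross-power spectrum (whose inverse
# FFT is the cross-correlation / visibility). Segments are distributed over
# worker processes. Generic illustration only.

FFT_LEN = 1024

def cross_spectrum(segment_pair):
    s1, s2 = segment_pair
    return np.fft.rfft(s1, FFT_LEN) * np.conj(np.fft.rfft(s2, FFT_LEN))

def correlate(station1, station2, workers=4):
    n_seg = len(station1) // FFT_LEN
    pairs = [(station1[i * FFT_LEN:(i + 1) * FFT_LEN],
              station2[i * FFT_LEN:(i + 1) * FFT_LEN]) for i in range(n_seg)]
    with Pool(workers) as pool:
        spectra = pool.map(cross_spectrum, pairs)
    return np.mean(spectra, axis=0)           # time-averaged visibility spectrum

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    common = rng.standard_normal(FFT_LEN * 64)                         # correlated signal
    st1 = common + 0.5 * rng.standard_normal(common.size)
    st2 = np.roll(common, 3) + 0.5 * rng.standard_normal(common.size)  # 3-sample delay
    vis = correlate(st1, st2)
    lag = int(np.argmax(np.abs(np.fft.irfft(vis))))
    lag = lag - FFT_LEN if lag > FFT_LEN // 2 else lag                 # wrap to signed lag
    print("recovered baseline delay:", abs(lag), "samples (true delay: 3)")
```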