Measurements of the angular vibration of aerial cameras under a variety of operating conditions are critical for analyzing the performance of the vibration isolation system. Instead of using an additional optical system to measure the angular motion of the camera, an image-based and easy-to-implement method is proposed for linear array cameras that measures the image motion captured by the camera directly. The natural frequencies of the vibration isolation system were also measured in a laboratory vibration test. For image vibration measurement, the angular vibration is converted to image motion through the relationship between the image motion and the angular motion of the camera. Based on the push-broom imaging principle, the image motion at the edge of the foreground image of a linear object is extracted using image processing techniques, including image segmentation and edge detection. The image motion is then analyzed in the time and frequency domains. The proposed method has been successfully demonstrated for angular vibration measurement in a flight test. The results from the vibration sensors and the position and orientation system of the flight tests are also given to validate the effectiveness and accuracy of the proposed approach. (C) 2021 Optical Society of America
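As an illustration of the image-motion extraction and time/frequency analysis this abstract describes, the sketch below locates a foreground/background edge in each scan line of a push-broom image and takes the spectrum of the resulting motion signal. It is a minimal example with assumed names (img, line_rate_hz) and a crude global threshold standing in for the paper's segmentation and edge-detection steps.

```python
import numpy as np

def edge_position_per_line(img, threshold=None):
    """Column of the dark-foreground / bright-background transition in each scan line."""
    if threshold is None:
        threshold = 0.5 * (img.min() + img.max())   # crude global threshold
    bright = img >= threshold                        # segment the bright background
    return np.argmax(bright, axis=1).astype(float)   # first bright pixel per row

def motion_spectrum(edge, line_rate_hz):
    """Image-motion signal and its amplitude spectrum."""
    motion = edge - edge.mean()                      # remove the static offset
    freqs = np.fft.rfftfreq(motion.size, d=1.0 / line_rate_hz)
    amp = np.abs(np.fft.rfft(motion)) / motion.size
    return motion, freqs, amp

# Synthetic example: a 100 Hz angular vibration of ~2 pixels at a 5 kHz line rate.
line_rate_hz = 5000.0
t = np.arange(4096) / line_rate_hz
true_edge = 200 + 2.0 * np.sin(2 * np.pi * 100 * t)
img = (np.arange(512)[None, :] > true_edge[:, None]).astype(float) * 255
motion, freqs, amp = motion_spectrum(edge_position_per_line(img), line_rate_hz)
print("dominant vibration frequency:", freqs[np.argmax(amp[1:]) + 1], "Hz")
```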
This feature issue of Biomedical Optics Express covers all aspects of translational photoacoustic research. Application areas include screening and diagnosis of diseases, imaging of disease progression and therapeutic response, and image-guided treatment, such as surgery, drug delivery, and photothermal/photodynamic therapy. The feature issue also covers relevant developments in photoacoustic instrumentation, contrast agents, and image processing and reconstruction algorithms. (C) 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
The depth-of-field of imaging systems can be enhanced by placing a phase mask in their aperture stop and deconvolving the image. In general, the mask is optimized using a closed-form image quality criterion assuming deconvolution with a Wiener filter. However, nonlinear deconvolution algorithms may have better performance, and the question remains as to whether a better co-designed system could be obtained from optimization with a criterion based on such algorithms. To investigate this issue, we compare optimization of phase masks with criteria based on the Wiener filter and on a nonlinear algorithm regularized by total variation. We show that the obtained optimal masks are identical, and propose a conjecture to explain this fact. This result is important since it supports the frequent co-design practice consisting of optimizing a system with a closed-form criterion based on linear deconvolution and deconvolving with a nonlinear algorithm. (c) 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
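For context, the closed-form criterion mentioned above assumes restoration with a Wiener filter. The snippet below is a generic frequency-domain Wiener deconvolution, not the authors' co-design code; the white-noise model and SNR value are assumptions for illustration, and the TV-regularized nonlinear algorithm is not shown.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Frequency-domain Wiener restoration: W = H* / (|H|^2 + 1/SNR)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy usage: blur a random scene with a 5x5 box PSF (circular convolution), then restore.
rng = np.random.default_rng(1)
scene = rng.random((128, 128))
psf = np.zeros((128, 128))
psf[:5, :5] = 1.0 / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, snr=1000.0)
print("blurred MSE: ", np.mean((blurred - scene) ** 2))
print("restored MSE:", np.mean((restored - scene) ** 2))
```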
We report field experiments, conducted in the presence of fog, aimed at imaging under poor visibility. By means of intensity modulation at the source and two-dimensional quadrature lock-in detection in software at the receiver, a significant enhancement of the contrast-to-noise ratio was achieved in the imaging of beacons over hectometric distances. Furthermore, by illuminating the field of view with a modulated source, the technique revealed objects that were previously obscured by multiple scattering of light. This method thus holds promise for aiding various forms of navigation under the poor visibility caused by fog. (C) 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
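A minimal sketch of software quadrature lock-in detection applied per pixel of a modulated image stack, in the spirit of the technique described above; the frame rate, modulation frequency, and array shapes are illustrative assumptions rather than the parameters used in the field experiments.

```python
import numpy as np

def lockin_amplitude(frames, frame_rate_hz, mod_freq_hz):
    """Per-pixel quadrature demodulation of a (T, H, W) stack of frames."""
    t = np.arange(frames.shape[0]) / frame_rate_hz
    ref_i = np.cos(2 * np.pi * mod_freq_hz * t)[:, None, None]
    ref_q = np.sin(2 * np.pi * mod_freq_hz * t)[:, None, None]
    i_comp = np.mean(frames * ref_i, axis=0)    # in-phase component
    q_comp = np.mean(frames * ref_q, axis=0)    # quadrature component
    return 2.0 * np.hypot(i_comp, q_comp)       # modulation-amplitude image

# Toy usage: a beacon modulated at 37 Hz buried in an unmodulated, noisy background.
rng = np.random.default_rng(2)
T, H, W, fs, fmod = 512, 64, 64, 500.0, 37.0
t = np.arange(T) / fs
frames = 50.0 + 5.0 * rng.standard_normal((T, H, W))
frames[:, 30:34, 30:34] += 10.0 * (1 + np.sin(2 * np.pi * fmod * t))[:, None, None]
amp = lockin_amplitude(frames, fs, fmod)
print("beacon-to-background amplitude ratio:", amp[31, 31] / np.median(amp))
```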
In this Letter, we present the modeling, design, and characterization of a light sheet-based structured light illumination (SLI) light field microscopy (LFM) system for fast 3D imaging, where a digital micromirror device is employed to rapidly generate designed sinusoidal patterns in the imaging field. Specifically, we sequentially acquire uniformly illuminated and structured light field images, followed by post-processing with a new, to the best of our knowledge, algorithm that combines the deconvolution and HiLo algorithms. This enables fast volumetric imaging with improved optical cross-sectioning capability at a speed of 50 volumes per second over an imaging field of 250 x 250 x 80 μm³ along the x, y, and z axes, respectively. Mathematical models have been derived to explain the performance enhancement due to suppressed background noise. To verify the results, imaging experiments on fluorescent beads, fern spores, and Drosophila brain samples have been performed. The results indicate that the light sheet-based SLI-LFM provides a fast 3D imaging solution with substantially improved optical cross-sectioning capability in comparison with a standard light sheet-based LFM. The new light field imaging method may find important applications in the field of biophotonics. (C) 2021 Optical Society of America.
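To make the fusion idea concrete, the following is a much-simplified HiLo-style combination of a uniformly illuminated image and a structured-illumination image. It does not reproduce the paper's combined deconvolution + HiLo algorithm; the filter width, the crude demodulation step, and the fusion weight eta are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma_lo=4.0, eta=1.0):
    """Fuse low-frequency sectioned content with high-frequency in-focus detail."""
    hi = uniform - gaussian_filter(uniform, sigma_lo)   # high-pass of the uniform image
    diff = np.abs(uniform - 2.0 * structured)           # crude demodulation of the pattern
    lo = gaussian_filter(diff, sigma_lo)                # low-pass sectioned estimate
    return eta * lo + hi

# Toy usage with random data standing in for measured light field images.
rng = np.random.default_rng(3)
uniform = rng.random((256, 256))
pattern = 0.5 * (1 + np.sin(2 * np.pi * np.arange(256) / 8))[None, :]
structured = uniform * pattern
print(hilo(uniform, structured).shape)
```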
The CMY colour camera differs from its RGB counterpart in that the subtractive colours cyan, magenta and yellow are used. A CMY camera performs better than an RGB camera in low-light conditions. However, conventional CMY colour filter technology made of pigments and dyes is limited in performance for next-generation image sensors with submicron pixel sizes, because such filters rely on absorption to subtract colours and cannot be fabricated at the nanoscale. This paper presents a CMOS-compatible, nanoscale-thick CMY colour mosaic made of Al-TiO2-Al nanorods, forming 0.82 million colour filter pixels, each 4.4 μm in size, arranged in a CMYM pattern. The colour mosaic was then integrated on an MT9P031 image sensor to form a CMY camera, and colour imaging is demonstrated using a 12-colour Macbeth chart. The developed technology will have applications in astronomy, low-exposure-time imaging in biology, and photography.
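The basic colour conversion such a subtractive-filter camera needs can be sketched as follows; the naive R = 1 - C mapping and the normalised values are illustrative assumptions, not the calibrated pipeline used with the Macbeth chart.

```python
import numpy as np

def cmy_to_rgb(cmy):
    """Naive subtractive-to-additive conversion: R = 1 - C, G = 1 - M, B = 1 - Y."""
    return np.clip(1.0 - np.asarray(cmy, dtype=float), 0.0, 1.0)

# Toy usage: a cyan-looking pixel (little red light) measured through C, M, Y filters.
pixel_cmy = [0.9, 0.2, 0.3]          # normalised responses behind the C, M, Y filters
print(cmy_to_rgb(pixel_cmy))         # -> [0.1, 0.8, 0.7], i.e. a cyan-ish RGB value
```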
Detail enhancement is key to the display of infrared images, and it is important that detail enhancement algorithms present a good visual effect to the observer. A novel algorithm for the detail enhancement of infrared images is proposed in this paper, based on the relativity of a Gaussian-adaptive bilateral filter. The algorithm consists of three steps. First, the input image is divided into a base layer and a detail layer using the relativity of the Gaussian-adaptive bilateral filter. Second, the detail layer is multiplied by the proposed weight coefficient, and the base layer is processed by histogram projection. Third, the processed detail and base layers are combined and output to an 8-bit display. Compared with other methods, the new algorithm greatly reduces the running time. The experimental results show that the proposed algorithm effectively improves the contrast of infrared images. (C) 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
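The three-step flow can be sketched roughly as below. A standard OpenCV bilateral filter stands in for the paper's relativity of a Gaussian-adaptive bilateral filter, and histogram equalisation stands in for histogram projection; the detail gain and filter parameters are arbitrary assumptions.

```python
import cv2
import numpy as np

def enhance_ir(img14, detail_gain=2.5):
    """Map a high-dynamic-range IR frame to an enhanced 8-bit image."""
    img = img14.astype(np.float32)
    base = cv2.bilateralFilter(img, d=9, sigmaColor=200.0, sigmaSpace=5.0)
    detail = img - base                                              # detail layer
    base8 = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    base8 = cv2.equalizeHist(base8)                                  # stand-in for histogram projection
    out = base8.astype(np.float32) + detail_gain * detail            # recombine and boost detail
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage with a synthetic 14-bit frame.
rng = np.random.default_rng(4)
frame = (8000 + 50 * rng.standard_normal((240, 320))).astype(np.uint16)
out = enhance_ir(frame)
print(out.dtype, out.shape)
```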
The traditional top-hat method is commonly used to quickly separate targets from a background, valued for its fast processing speed and wide applicability on programmable hardware. However, in important fields such as microfluidic control, medicine, aerospace, and optical measurement, the observed targets are often spots of different sizes. Because the formation mechanisms of multiscale spots differ, they cannot be successfully extracted and classified by the traditional top-hat method. To preserve the integrity of targets of a specific size while suppressing noise, the imaging mechanisms of different types of spots are studied, and an improved top-hat method with a gray-scale value-based transform is proposed. Compared with the traditional top-hat method, the proposed algorithm is more effective at completely removing unwanted spots. Results on simulated and real images verify the effectiveness of the double top-hat method in extracting targets of a specific size. Additionally, the resolution of this method depends on the parameter k, which is discussed in this paper. Furthermore, a multi-top-hat algorithm is presented to distinguish spots of different sizes, which could be used for real-time multiscale target detection and tracking. (C) 2019 Optical Society of America
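The size-selective idea behind a double top-hat can be illustrated with two white top-hat transforms of different structuring-element sizes, as in the sketch below; this does not reproduce the paper's gray-scale value-based transform or its parameter k.

```python
import numpy as np
from scipy.ndimage import white_tophat

def band_tophat(img, small_size, large_size):
    """Keep bright spots larger than small_size but smaller than large_size."""
    spots_below_large = white_tophat(img, size=large_size)   # all spots smaller than large_size
    spots_below_small = white_tophat(img, size=small_size)   # only spots smaller than small_size
    return np.clip(spots_below_large - spots_below_small, 0, None)

# Toy usage: one 3-pixel spot and one 9-pixel spot on a flat background.
img = np.zeros((64, 64))
img[10:13, 10:13] = 1.0     # small spot
img[40:49, 40:49] = 1.0     # larger spot
out = band_tophat(img, small_size=5, large_size=15)
print("3-px spot suppressed:", out[11, 11] < 0.5, "| 9-px spot kept:", out[44, 44] > 0.5)
```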
We introduce and verify a single-channel, time-division filtering, low-light-level (LLL) color night vision system (3LCNV). The imaging scheme, comprising a tunable liquid crystal filter, a third-generation GaAsP image intensifier, and a CMOS camera, achieves LLL color imaging while ensuring sensitivity. The image enhancement and color reconstruction pipeline suited to LLL night vision combines overexposure-resistant white balance, color correction matrix (CCM) color correction, and color image denoising to improve color visibility and reduce color differences and image noise. The proposed night vision system extends the minimum working illuminance to 10^-4 lx and achieves natural and clear color LLL imaging, improving night-time observation. (C) 2019 Optical Society of America
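A minimal sketch of the white-balance and CCM stages of such a colour reconstruction chain is given below; the gains and the 3x3 matrix are placeholder values, not the calibrated parameters of the 3LCNV system, and denoising is omitted.

```python
import numpy as np

def white_balance(img, gains):
    """Per-channel gains, e.g. estimated from a grey reference patch."""
    return img * np.asarray(gains)[None, None, :]

def apply_ccm(img, ccm):
    """Linear 3x3 colour correction applied to every RGB pixel."""
    return np.clip(img @ ccm.T, 0.0, 1.0)

ccm = np.array([[ 1.6, -0.4, -0.2],      # placeholder matrix; rows roughly sum to 1
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
rng = np.random.default_rng(5)
raw = rng.random((480, 640, 3)) * 0.2    # stand-in for a dim low-light frame
out = apply_ccm(white_balance(raw, gains=(1.9, 1.0, 1.6)), ccm)
print(out.shape, float(out.max()))
```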
Traditional paradigms for imaging rely on the use of a spatial structure, either in the detector (pixel arrays) or in the illumination (patterned light). Removal of the spatial structure in the detector or illumination, i.e., imaging with just a single-point sensor, would require solving a strongly ill-posed inverse retrieval problem that has not been solved to date. Here, we demonstrate a data-driven approach in which full 3D information is obtained with just a single-point, single-photon avalanche diode that records the arrival time of photons reflected from a scene illuminated with short pulses of light. Imaging with single-point time-of-flight (temporal) data opens new routes in terms of speed, size, and functionality. As an example, we show how training based on an optical time-of-flight camera enables a compact radio-frequency impulse radio detection and ranging transceiver to provide 3D images. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
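As a conceptual sketch of the data-driven mapping described above, the snippet below regresses a single-point photon arrival-time histogram onto a coarse depth image with a small fully connected network; the histogram length, network shape, output resolution, and random stand-in data are all assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

HIST_BINS, OUT_H, OUT_W = 512, 32, 32    # assumed histogram length and output size

model = nn.Sequential(
    nn.Linear(HIST_BINS, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, OUT_H * OUT_W),      # flattened depth map
)

def train_step(model, hist_batch, depth_batch, opt):
    """One supervised step: arrival-time histogram -> depth-image regression."""
    pred = model(hist_batch).view(-1, OUT_H, OUT_W)
    loss = nn.functional.mse_loss(pred, depth_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for co-registered training data.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
hists = torch.rand(8, HIST_BINS)
depths = torch.rand(8, OUT_H, OUT_W)
print("loss:", train_step(model, hists, depths, opt))
```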