ISBN (print): 9781467393393
The accurate detection of moving objects is of fundamental importance in surveillance systems. A myriad of motion detection algorithms is currently available for video surveillance systems, the most prominent being based on background detection, optical flow and edge detection methods. However, every algorithm has shortcomings. This paper proposes an algorithm that combines the strengths of the optical flow and edge detection methods to yield a robust foreground detection approach that appreciably overcomes the drawbacks of the individual algorithms. Furthermore, the paper extensively compares the performance of the new approach with other foreground detection algorithms against seven challenges typical of foreground detection techniques. The evaluation framework compares the binary mask obtained from each algorithm with the ground-truth image to appraise each algorithm's effectiveness. Empirical results show that the proposed algorithm copes well with illumination changes (gradual as well as sudden) in the video sequence, and also handles the camouflaging of the foreground with the background.
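As a rough illustration of this kind of fusion (a sketch, not the authors' exact pipeline), the snippet below thresholds the magnitude of dense optical flow and intersects it with a dilated edge map to form a binary foreground mask; the Farneback and Canny parameters and the flow threshold are assumptions.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, curr_gray, flow_thresh=1.0):
    """Fuse dense optical flow with edge detection into a binary foreground mask.

    Both inputs are assumed to be 8-bit grayscale frames of the same size."""
    # Dense optical flow (Farneback) between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    motion = (mag > flow_thresh).astype(np.uint8) * 255

    # Edge map of the current frame, dilated to tolerate small misalignments
    edges = cv2.Canny(curr_gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))

    # Keep moving pixels that also lie near edges, then close small holes
    mask = cv2.bitwise_and(motion, edges)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
```

Evaluating such a mask against a ground-truth image, as the paper's framework does, then reduces to a pixel-wise comparison of two binary images (for example precision, recall or F-measure).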
Image quality assessment is a very important and challenging task for many image processing applications. The quality of an image can be assessed either with the help of a reference image of the same scene or blindly, without any reference image. No-reference image quality assessment algorithms specific to a particular type of distortion are very popular in different image processing applications. Color quantization is a technique for reducing the number of unique colors in an image, but excessive color quantization can reduce the visual quality of images. In this paper, we propose a no-reference image quality measure specific to the quality assessment of color quantized images, with and without dither. The results are validated on a subset of the standard TID2013 image quality dataset to check agreement with the human visual system.
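To make the assessed distortion concrete, the snippet below produces color-quantized versions of an image with and without Floyd-Steinberg dithering using Pillow (version 9.1 or later is assumed for the Dither enum); the palette size, the input file name and the simple unique-color count printed at the end are only illustrative and are not the measure proposed in the paper.

```python
from PIL import Image
import numpy as np

img = Image.open("reference.png").convert("RGB")  # hypothetical input file

# Color quantization to a 16-color palette, with and without dithering
quant = img.quantize(colors=16, dither=Image.Dither.NONE).convert("RGB")
quant_dither = img.quantize(colors=16, dither=Image.Dither.FLOYDSTEINBERG).convert("RGB")

def unique_colors(im):
    """Number of distinct RGB triples in the image."""
    return len(np.unique(np.asarray(im).reshape(-1, 3), axis=0))

print("original:", unique_colors(img))
print("quantized:", unique_colors(quant))
print("quantized + dither:", unique_colors(quant_dither))
```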
ISBN (print): 9781509016129
Many tracking algorithms applied in medical image processing, such as observing the movement of cells, have greatly improved in accuracy and robustness. However, it remains difficult to deal with large-area occlusion and complete occlusion. In this paper, we propose a fast scale-adaptive tracking algorithm based on correlation filtering. Besides tracking changes of the target scale quickly, our method can also handle large-area occlusion and the complete disappearance of the target. Compared with a state-of-the-art scale-adaptive tracking method, the proposed method demonstrates higher tracking accuracy and better real-time performance.
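The abstract does not give the filter details, but the general idea of correlation-filter tracking with scale adaptation can be sketched as follows: a filter learned in the Fourier domain is evaluated on search patches extracted at several scales, and the scale with the strongest response wins. This is a generic MOSSE-style sketch, not the authors' method; the Gaussian width, scale set and regularisation constant are assumptions.

```python
import numpy as np

def train_filter(patch, sigma=2.0, eps=1e-3):
    """Learn a MOSSE-style correlation filter (Fourier domain) from one grayscale patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    G = np.fft.fft2(np.fft.ifftshift(g))          # desired Gaussian response
    F = np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + eps)

def response(filter_hat, patch):
    """Correlation response of the filter on a search patch of the same size."""
    return np.real(np.fft.ifft2(filter_hat * np.fft.fft2(patch)))

def best_scale(filter_hat, extract_patch, scales=(0.95, 1.0, 1.05)):
    """Evaluate the filter at several scales and return the scale with the peak response.

    `extract_patch(s)` is a hypothetical helper that returns a grayscale patch around
    the current target position, resampled so that scale `s` maps to the filter size."""
    peaks = [response(filter_hat, extract_patch(s)).max() for s in scales]
    return scales[int(np.argmax(peaks))]
```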
This paper presents the implementation of an adaptive contour detection filter and tumor characterization on a field programmable gate array (FPGA) using a combination of hardware and software components. The proposed system locates contours by computing the gradient in preferred directions, while quantifying the importance of each outline with a judicious thresholding. The dedicated algorithm is implemented on a Xilinx Spartan-6 FPGA, and results are displayed on a VGA monitor. The FPGA offers the performance necessary for real-time image and video processing, while keeping the system flexible enough to support an adaptive algorithm. Simulation and synthesis results of the proposed edge detection processor on the FPGA chip demonstrate the efficiency of the proposed architecture.
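A software reference for the gradient-plus-thresholding stage (the part that would be mapped to FPGA logic) might look like the sketch below; the Sobel kernels as the "preferred directions" and Otsu's method as the threshold selection are stand-ins, not necessarily what the hardware implements.

```python
import cv2
import numpy as np

def contour_map(gray):
    """Gradient-magnitude edge map with automatic thresholding (software reference)."""
    # Gradients in the two preferred directions (horizontal and vertical Sobel)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)

    # Normalise to 8 bits and pick a global threshold with Otsu's method
    mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, edges = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges
```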
ISBN (print): 9781467382878
Some image processing algorithms are very costly in terms of operations and time. To use these algorithms in a real-time environment, optimization and vectorization are necessary. In this paper, approaches are proposed to optimize and vectorize the algorithm and to fit it into a small memory footprint. An optimized anisotropic-diffusion-based fog removal algorithm is proposed. A fog removal algorithm removes the fog from an image and produces an image with better visibility. The algorithm has several phases, such as anisotropic diffusion, histogram stretching and smoothing. Anisotropic diffusion is an iterative process that accounts for nearly 70% of the time complexity of the whole algorithm. Here, optimization and vectorization of the anisotropic diffusion step are proposed for better performance. The optimization techniques cost some accuracy, but this is negligible compared to the significant improvement in performance. For memory-constrained environments, a method is proposed to process the image in blocks while maintaining the integrity of the operations. Results confirm that with our optimization and vectorization approaches, performance increases to approximately 90 fps for a VGA image on an image processing DSP simulator. Even if the system does not support vector operations, the proposed optimization techniques can be used to achieve better performance (2× faster).
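To illustrate the kind of vectorization meant here (a sketch, not the authors' DSP code), the snippet below implements a Perona-Malik-style diffusion pass entirely with NumPy whole-array operations, so each iteration updates the full image without per-pixel Python loops; the conductance function and the parameter values are assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, niter=10, kappa=30.0, lam=0.2):
    """Vectorized Perona-Malik diffusion: the four neighbour differences are
    computed as whole-array shifts (periodic borders via np.roll, fine for a sketch)."""
    u = img.astype(np.float32)
    for _ in range(niter):
        # Finite differences towards the four neighbours
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u, 1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        # Edge-stopping conductance g(d) = exp(-(d / kappa)^2)
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cW, cE = np.exp(-(dW / kappa) ** 2), np.exp(-(dE / kappa) ** 2)
        u += lam * (cN * dN + cS * dS + cW * dW + cE * dE)
    return u
```

The same shift-and-multiply structure is what makes the step amenable to SIMD vector instructions on a DSP.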
ISBN (print): 9781467399623
Ultra-low delay video transmission is becoming increasingly important. Video-based applications with ultra-low delay requirements range from teleoperation scenarios, such as controlling drones or telesurgery, to autonomous control of dynamic processes using computer vision algorithms applied to real-time video. To evaluate the performance of the video transmission chain in such systems, it is important to be able to precisely measure the glass-to-glass (G2G) delay of the transmitted video. In this paper, we present a low-complexity system that takes a series of pairwise independent measurements of G2G delay and derives performance metrics such as the mean and minimum delay from the data. The precision is in the sub-millisecond range, mainly limited by the sampling rate of the measurement system. In our implementation, we achieve a G2G measurement precision of 0.5 milliseconds with a sampling rate of 2 kHz.
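Deriving the metrics from the individual measurements is simple statistics; a minimal sketch, assuming each measurement is the interval between a light event emitted in front of the camera and the instant it becomes visible on the display, with both events timestamped by a 2 kHz sampler (the event-detection front end is not shown):

```python
import numpy as np

SAMPLE_RATE_HZ = 2000.0               # sampling rate of the measurement system
RESOLUTION_S = 1.0 / SAMPLE_RATE_HZ   # 0.5 ms timestamp quantisation

def g2g_metrics(emit_idx, display_idx):
    """Delay statistics from pairwise independent G2G measurements.

    emit_idx[i] / display_idx[i] are the sample indices at which the i-th light
    event was emitted and at which it appeared on the display (hypothetical inputs)."""
    delays = (np.asarray(display_idx) - np.asarray(emit_idx)) * RESOLUTION_S
    return {
        "mean_delay_s": float(delays.mean()),
        "min_delay_s": float(delays.min()),
        "max_delay_s": float(delays.max()),
        "std_s": float(delays.std(ddof=1)),
        "precision_s": RESOLUTION_S,   # limited by the sampler, as stated in the abstract
    }

# Example with five hypothetical measurements
print(g2g_metrics([100, 300, 520, 760, 990], [190, 385, 608, 852, 1081]))
```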
ISBN (print): 9781467386838
This paper proposes a framework for analyzing video of physical processes as a paradigm of dynamic data-driven application systems (DDDAS). The algorithms were tested on a combustion system under fuel-lean and ultra-lean conditions. The main challenge here is to develop feature extraction and information compression algorithms with low computational complexity, so that they can be applied to real-time analysis of video captured by a high-speed camera. In the proposed method, the image frames of the video are compressed into a sequence of image features. These image features are then mapped to a sequence of symbols by partitioning the feature space. Finally, D-Markov machines, a special class of probabilistic finite state automata (PFSA), are constructed from the symbol strings to extract pertinent features representing the embedded dynamic characteristics of the physical process. The paper compares the performance and efficiency of three image feature extraction algorithms: Histogram of Oriented Gradients, Gabor Wavelets, and Fractal Dimension. The k-means clustering algorithm is used for feature space partitioning. The proposed algorithm has been validated on experimental data from a laboratory combustor with a single fuel injector.
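A compact sketch of that symbolisation pipeline, using HOG as the example feature and one-step memory (D = 1), assuming scikit-image and scikit-learn are available; the HOG cell sizes and alphabet size are assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def frames_to_symbols(frames, n_symbols=8):
    """Compress each grayscale frame to a HOG feature vector, then partition the
    feature space with k-means so every frame maps to one symbol."""
    feats = np.array([hog(f, orientations=9, pixels_per_cell=(16, 16),
                          cells_per_block=(2, 2)) for f in frames])
    return KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit_predict(feats)

def d_markov_matrix(symbols, n_symbols=8):
    """Row-stochastic state-transition matrix of a D = 1 D-Markov machine."""
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    counts += 1e-12                      # avoid division by zero for unseen states
    return counts / counts.sum(axis=1, keepdims=True)

# frames: sequence of grayscale images from the high-speed camera (assumed input)
# P = d_markov_matrix(frames_to_symbols(frames))
```

The rows of the resulting transition matrix (or features derived from it) then serve as the low-dimensional representation of the embedded process dynamics.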
ISBN (print): 9781509018918
Next generation radio telescopes, like the Square Kilometre Array, will acquire an unprecedented amount of data for radio astronomy. The development of fast, parallelisable or distributed algorithms for handling such large-scale data sets is of prime importance. Motivated by this, we investigate herein a convex optimisation algorithmic structure, based on primal-dual forward-backward iterations, for solving the radio interferometric imaging problem. It can encompass any convex prior of interest. It allows for the distributed processing of the measured data and introduces further flexibility by employing a probabilistic approach for the selection of the data blocks used at a given iteration. We study the reconstruction performance with respect to the data distribution and we propose the use of nonuniform probabilities for the randomised updates. Our simulations show the feasibility of the randomisation given a limited computing infrastructure as well as important computational advantages when compared to state-of-the-art algorithmic structures.
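A minimal sketch of a randomised primal-dual forward-backward iteration of this flavour, written here for a toy unconstrained problem min_x lambda*||x||_1 + sum_b 0.5*||Phi_b x - y_b||^2 with one dual variable per data block, each block updated with its own probability. The step sizes, the probabilities and the simple quadratic data term are assumptions and do not reproduce the paper's constrained formulation or priors.

```python
import numpy as np

def randomized_primal_dual(Phi_blocks, y_blocks, lam=0.1, probs=None,
                           n_iter=500, seed=0):
    """Primal-dual forward-backward with randomised per-block dual updates."""
    rng = np.random.default_rng(seed)
    n = Phi_blocks[0].shape[1]
    B = len(Phi_blocks)
    probs = probs if probs is not None else [1.0 / B] * B
    x = np.zeros(n)
    v = [np.zeros(Phi.shape[0]) for Phi in Phi_blocks]      # one dual variable per block

    # Step sizes satisfying tau * sigma * ||Phi||^2 <= 1 for the stacked operator
    L = sum(np.linalg.norm(Phi, 2) ** 2 for Phi in Phi_blocks)
    tau = sigma = 1.0 / np.sqrt(L)

    for _ in range(n_iter):
        # Primal step: forward step on the dual coupling, then soft-thresholding (prox of l1)
        z = x - tau * sum(Phi.T @ vb for Phi, vb in zip(Phi_blocks, v))
        x_new = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)
        x_bar = 2 * x_new - x

        # Dual step: block b is updated only with probability probs[b]
        for b, (Phi, y) in enumerate(zip(Phi_blocks, y_blocks)):
            if rng.random() < probs[b]:
                u = v[b] + sigma * (Phi @ x_bar)
                v[b] = (u - sigma * y) / (1.0 + sigma)       # prox of the conjugate of 0.5*||.-y||^2
        x = x_new
    return x
```

Skipping a block's dual update is exactly what lets each node of a distributed system work on only part of the data at a given iteration.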
ISBN (print): 9781467399623
The depth of a scene is very important in computer vision. One way to obtain the depth map is stereo vision, but because of noise, textureless areas of the scene and occluded regions, stereo matching is rather challenging. Several strong algorithms have been proposed. In this paper, the mainstream semi-global matching (SGM) stereo algorithm is studied, and a disparity refinement algorithm is proposed. SGM is used to compute an initial disparity map, and a better disparity map is then obtained by applying the proposed refinement, which is based on a segment tree and a fast weighted median filter (WMF). Experiments are carried out on the well-known Middlebury dataset. The results show that the proposed algorithm improves the quality of the disparity map effectively in most cases.
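A compressed stand-in for that pipeline, assuming opencv-contrib-python for the ximgproc module and leaving out the segment-tree aggregation: compute an initial SGBM disparity map, then refine it with an edge-aware weighted median filter guided by the left image. File names and matcher parameters are placeholders.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Initial disparity map with semi-global (block) matching
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disp = sgbm.compute(left, right).astype('float32') / 16.0   # SGBM output is fixed-point

# Refinement stand-in: weighted median filter guided by the left image
disp_refined = cv2.ximgproc.weightedMedianFilter(left, disp, 7)
```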
ISBN (print): 9781509043217
Parallel programming has many benefits that can help developers and researchers improve the performance of algorithms and make them more practical in real-life use. This is especially true for systems involving medical images. Image segmentation for volume extraction is a well-known segmentation process that takes a long time to execute. In this paper, we consider a new version of the Fuzzy C-Means (FCM) segmentation algorithm (known as IT2FPCM) and provide a parallel implementation of it that is 12X faster than the sequential implementation. The considered algorithm is based on Interval Type-2 FCM and combines fuzzy and possibilistic ideas in order to obtain higher accuracy. We conduct our experiments on two different machines, and the results show speedups of 11X and 12X on the two machines, respectively.
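For reference, the core of a standard (type-1) FCM iteration is shown below in vectorized NumPy; it is not the IT2FPCM variant from the paper, but it is the part whose membership and centroid updates are independent across pixels, which is what a parallel implementation exploits. The fuzzifier m, cluster count and iteration budget are assumptions.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on X with shape (n_samples, n_features), vectorized."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships, each row sums to 1

    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]           # weighted centroids
        # Squared distance of every sample to every centre, no Python loops
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1)), written with squared distances
        U = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1))).sum(axis=2)
    return U, centers

# Example: segment a grayscale image/volume by clustering its intensities
# U, centers = fcm(img.reshape(-1, 1), n_clusters=3)
# labels = U.argmax(axis=1).reshape(img.shape)
```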