Digital image watermarking technology is widely used to protect intellectual property and to authenticate digital content in network environments. The aim of the paper is to invoke the improved Laplacian Pyram...
ISBN (print): 9780998878201
This paper studies integrated solid-state Hall effect sensors with processing microelectronics. Many commercialized triaxial magnetometers combine III-V semiconductor sensing elements, application-specific integrated circuits (ASICs) and microcontrollers. Processing algorithms are supported by device- and hardware-specific architectures, middleware and software. Integrated Hall effect sensors with InSb sensing elements can ensure sufficient accuracy, measurement range, adequate spatiotemporal resolution, linearity, high bandwidth, and fast response time. Nonlinearities, cross-axis coupling, temperature sensitivity and other performance-degrading phenomena are minimized using ASICs that implement filters, compensators, etc. We examine and solve noise-attenuation and error-reduction problems by processing the ASIC outputs of integrated Hall effect sensors. Noise and error are minimized using adaptive filters and information analysis. A dynamic mode estimator supports system reconfiguration. The experimentally substantiated, laboratory-proven schemes advance existing solutions toward CPS-compliant sensing for complex systems.
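The abstract does not specify the adaptive-filter structure; as a hedged sketch of the kind of adaptive noise attenuation it describes, here is a minimal LMS noise canceller in Python (the signal, the noise path `h`, and all parameter values are hypothetical, not the paper's):

```python
import numpy as np

def lms_filter(reference, noisy, n_taps=8, mu=0.01):
    """Minimal LMS adaptive noise canceller.

    reference : input correlated with the noise only
    noisy     : measured output = signal + filtered noise
    Returns the error signal, i.e. the noise-reduced estimate.
    """
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(noisy))
    for n in range(n_taps - 1, len(noisy)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        y = w @ x                                  # estimate of the noise
        e = noisy[n] - y                           # error = cleaned sample
        w += 2.0 * mu * e * x                      # LMS weight update
        cleaned[n] = e
    return cleaned

# toy demo (hypothetical data): sine wave plus noise shaped by an
# unknown causal FIR path h
rng = np.random.default_rng(0)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50)
noise = rng.normal(size=t.size)
h = np.array([0.5, 0.3, 0.2])
noisy = signal + np.convolve(noise, h)[:t.size]
cleaned = lms_filter(noise, noisy)
```

In the sensor setting, `reference` would be a noise-correlated auxiliary channel and `noisy` the ASIC output; the step size `mu` trades convergence speed against steady-state error.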
NASA Technical Reports Server (Ntrs) 19880014804: Third conference on Artificial Intelligence for Space Applications, Part 2 by NASA Technical Reports Server (Ntrs); published by
With the rapid growth of image processing technologies, objective Image Quality Assessment (IQA) is a topic on which considerable research effort has been spent over the last two decades. IQA algorithms based on image str...
ISBN (print): 9783319545264; 9783319545257
Smart/Intelligent video surveillance technology plays a central role in emerging smart city systems. Most intelligent visual algorithms require large-scale image/video datasets to train classifiers or acquire discriminative features using machine learning. However, most existing datasets are collected under non-surveillance conditions, which differ significantly from practical surveillance data. As a consequence, many existing intelligent visual algorithms trained on traditional datasets do not perform well in real-world surveillance applications. We believe the lack of high-quality surveillance datasets has greatly limited the application of computer vision algorithms in practical surveillance scenarios. To solve this problem, a large-scale and comprehensive surveillance image and video database and test platform, called Benchmark and Evaluation of Surveillance Task (BEST), is developed in this work. The original images and videos in BEST were all collected from in-use surveillance cameras and have been carefully selected to cover a wide and balanced range of outdoor surveillance scenarios. Compared with existing surveillance/non-surveillance datasets, the proposed BEST dataset provides a realistic, extensive and diversified testbed for a more comprehensive performance evaluation. Our experimental results show that the performance of seven pedestrian detection algorithms on BEST is worse than on existing datasets. This highlights the difference between non-surveillance data and real surveillance data, which is the major cause of the performance drop. The dataset is open to the public and can be downloaded at: http://***/best/Data/List/Datasets.
ISBN (print): 9783319545264; 9783319545257
Pedestrian detection is highly valued in intelligent surveillance systems. Most existing pedestrian datasets are autonomously collected from non-surveillance videos, which results in significant differences between the self-collected data and practical surveillance data. The differences include resolution, illumination, viewpoint, and occlusion. Because of these differences, most existing pedestrian detection algorithms based on traditional datasets can hardly be applied to surveillance applications directly. To fill the gap, a surveillance pedestrian image dataset (SPID), in which all images were collected from in-use surveillance systems, was constructed and used to evaluate existing pedestrian detection (PD) methods. The dataset covers various surveillance scenes and pedestrian scales, viewpoints, and illuminations. Four traditional PD algorithms using hand-crafted features and one deep-learning-based PD method are evaluated on SPID and on well-known existing pedestrian datasets such as INRIA and Caltech. The experimental ROC curves show that all these algorithms perform worse on SPID than on the INRIA and Caltech datasets, which confirms that the differences between non-surveillance data and real surveillance data degrade PD performance. The main factors include scale, viewpoint, illumination and occlusion. Thus a dedicated surveillance pedestrian dataset is necessary. We believe that the release of SPID can stimulate innovative research on the challenging and important surveillance pedestrian detection problem. SPID is available online at: http://***/best/Data/List/Datasets.
ISBN (print): 9781538648391
Vessel enhancement in two-dimensional angiogram images is an essential prerequisite step toward the isolation of coronary arteries. Hessian-based filters are the most commonly used vessel enhancement filters; however, they are sensitive to noise and suppress bifurcation regions, which results in disconnected vessels. In this study, we present a technique that enhances the coronary arteries in 2D angiograms and refines the noisy vesselness response obtained through Frangi's method using a guided filter. The result is an enhanced image with fewer discontinuities and less joint suppression, which serves as an effective pre-processing step for binarization of the Frangi vessel response. The guided filter smooths the image while preserving edges, which suits vessel enhancement. Following this filter, adaptive thresholding is applied to segment the coronary arteries from the angiogram. The proposed method has been tested on real angiography images, and its efficiency has been shown both qualitatively and quantitatively.
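The Frangi filter itself is involved; the sketch below illustrates only the refinement stage described here: a minimal guided filter (He et al.) built from box filters, applied to a synthetic stand-in for a vesselness map, followed by the adaptive (local-mean) thresholding step. The window radii, `eps`, the 0.1 offset, and the toy image are illustrative assumptions, not the paper's values:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with an integral image."""
    H, W = img.shape
    p = np.pad(img, r, mode="edge")
    ii = np.zeros((H + 2 * r + 1, W + 2 * r + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    s = 2 * r + 1
    return (ii[s:, s:] - ii[:-s, s:] - ii[s:, :-s] + ii[:-s, :-s]) / s ** 2

def guided_filter(guide, src, r=4, eps=1e-2):
    """Guided filter (He et al.): smooth `src` while following `guide` edges."""
    mean_i = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_ip = box_filter(guide * src, r) - mean_i * mean_p
    var_i = box_filter(guide * guide, r) - mean_i ** 2
    a = cov_ip / (var_i + eps)          # per-window linear coefficients
    b = mean_p - a * mean_i
    return box_filter(a, r) * guide + box_filter(b, r)

# toy data: a bright horizontal "vessel" plus noise, standing in for a
# Frangi vesselness response (not computed here)
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[30:34, :] = 1.0                     # synthetic vessel
vesselness = img + 0.3 * rng.normal(size=img.shape)
refined = guided_filter(img, vesselness, r=4, eps=1e-2)
# adaptive threshold: a pixel is vessel if above its local mean plus an offset
mask = refined > box_filter(refined, 8) + 0.1
```

In the actual pipeline, `guide` would be the original angiogram and `vesselness` Frangi's response; the guided filter smooths noise while following the guide's edges, which is why vessel boundaries survive the smoothing.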
ISBN (print): 9781538679531; 9781538679524
High-precision center detection of X-markers is required in many applications such as surgical navigation systems and camera calibration. The Hough transform is a preferable tool for extracting intersecting lines in an image, which leads to center detection. In this paper, we detect X-marker centers with sub-pixel precision using the Hough transform. Switching to Hough space lets us apply operations such as thresholding, filtering and weighted averaging on the coordinates. The algorithm involves two parameters, `Hough Size' and `Filter Size', which must be tuned for best performance. On a dataset of 900 images, the best performance is achieved with values of 180 and 23 for these parameters, respectively. Using this setting, 90.8% of the centers are detected successfully with sub-pixel precision. The average distance between detected centers and reference centers is 0.51 pixels. This suggests that the proposed algorithm has the potential to be utilized for sub-pixel marker detection.
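The exact weighting scheme is not given in the abstract; the following is a hedged sketch of the overall idea — vote into a Hough accumulator, refine each peak by a weighted average over neighboring (theta, rho) cells, suppress it, find the second peak, and intersect the two lines — on a synthetic X. The marker geometry and all sizes are made up; the paper's tuned `Hough Size` 180 and `Filter Size` 23 are not reproduced here:

```python
import numpy as np

def hough(points, n_theta=180, n_rho=129, rho_max=64.0):
    """Vote points into a (theta, rho) accumulator; line model:
    x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho))
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        j = np.round((r + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), j] += 1
    return acc, thetas, rhos

def refined_peak(acc, thetas, rhos, half=2):
    """Sub-bin (theta, rho): weighted average around the strongest cell."""
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    w = acc[i - half:i + half + 1, j - half:j + half + 1]
    tt, rr = np.meshgrid(thetas[i - half:i + half + 1],
                         rhos[j - half:j + half + 1], indexing="ij")
    return (w * tt).sum() / w.sum(), (w * rr).sum() / w.sum(), i

# synthetic X-marker: two perpendicular segments crossing at a known
# (hypothetical) sub-pixel center
cx, cy = 20.3, 17.6
pts = [(cx + t * np.cos(a), cy + t * np.sin(a))
       for a in (np.deg2rad(20.0), np.deg2rad(110.0))
       for t in np.linspace(-15, 15, 61)]

acc, thetas, rhos = hough(pts)
t1, r1, i1 = refined_peak(acc, thetas, rhos)
acc[max(i1 - 4, 0):i1 + 5, :] = 0        # suppress the first line's votes
t2, r2, _ = refined_peak(acc, thetas, rhos)

# the marker center is the intersection of the two recovered lines
A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
center = np.linalg.solve(A, np.array([r1, r2]))
```

With perpendicular arms the 2x2 system is well conditioned; the weighted averaging over accumulator cells is what pushes the recovered (theta, rho), and hence the intersection, below the accumulator's bin resolution.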
ISBN (print): 9783319565385; 9783319565378
People recognition in digital images has wide applications and challenges. In this article, we present a systematic review of works published in the last decade; based on it, we identified, implemented and tested the most frequently used and best-assessed algorithms. We found Histograms of Oriented Gradients (HOG) as the feature-extraction algorithm, and two classification algorithms, AdaBoost and Support Vector Machine (SVM). The tests were performed on 50 images chosen randomly from the Penn-Fudan public database. The accuracy of the SVM-HOG combination was 0.96, similar to a related work; the detection rate was 0.66 for SVM-HOG and 0.72 for AdaBoost-HOG, both inferior to related works. We discuss possible reasons.
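As a minimal illustration of the HOG features this review centers on, the sketch below computes only unnormalized per-cell orientation histograms; real HOG adds block normalization, and detection requires a trained SVM or AdaBoost classifier on top. Cell size and bin count are the common defaults, assumed rather than taken from the article:

```python
import numpy as np

def hog_features(img, cell=8, n_bins=9):
    """Per-cell histograms of gradient orientation, weighted by magnitude
    (the core of HOG; block normalization is omitted in this sketch)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, n_bins))
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist[i, j] = np.bincount(bins[sl].ravel(),
                                     weights=mag[sl].ravel(),
                                     minlength=n_bins)
    return hist

# toy input: a vertical step edge, whose gradient is purely horizontal
img = np.zeros((32, 32))
img[:, 16:] = 1.0
feat = hog_features(img)          # the descriptor would be feat.ravel()
```

For a vertical edge, all gradient energy falls into the 0-degree orientation bin, which is the kind of pattern the downstream SVM or AdaBoost classifier learns to separate.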
ISBN (print): 9783319472744; 9783319472737
Compression of moving images has opened unprecedented opportunities for the transmission and storage of digital video. The extraordinary performance of today's video codecs is the result of decades of work on the development of data-encoding methods. This paper attempts to show that history of development. It highlights the history of individual data-encoding algorithms as well as the evolution of video compression technologies as a whole. The functionalities of codecs evolved along with successive technologies, which is also a topic of the paper. The paper ends with the authors' forecast of the future evolution of video compression technologies.