ISBN (print): 9781509055227
Feature selection, as a preprocessing step to machine learning, plays a pivotal role in removing irrelevant data, reducing dimensionality, and improving learning performance. In recent years, sparse representation has become a useful tool for both supervised and unsupervised feature selection. However, most existing algorithms still suffer from problems such as a large computational load and unstable performance. This paper therefore proposes a new unsupervised feature selection algorithm via sparse representation (UFSSR), designed for both efficiency and effectiveness. First, part of the data matrix is reconstructed via sparse representation, which makes the proposed algorithm robust and independent of domain knowledge. Then, to reduce the reconstruction error, a new feature evaluation function is introduced to rank all features. Theoretical analysis and experiments against many popular algorithms on a set of datasets demonstrate the improvements brought by UFSSR.
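As a rough illustration of this idea (not the paper's exact UFSSR formulation), the sketch below ranks features by how poorly each one can be sparsely reconstructed from the others; the Lasso solver and the scoring rule are illustrative assumptions.

```python
# A minimal sketch of unsupervised feature ranking via sparse
# reconstruction. Assumption: a feature that the remaining features
# reconstruct poorly carries information the others cannot provide.
import numpy as np
from sklearn.linear_model import Lasso

def rank_features_by_sparse_reconstruction(X, alpha=0.01):
    """Return feature indices sorted from most to least informative."""
    n_samples, n_features = X.shape
    errors = np.empty(n_features)
    for j in range(n_features):
        others = np.delete(X, j, axis=1)       # all columns except j
        model = Lasso(alpha=alpha).fit(others, X[:, j])
        errors[j] = np.mean((model.predict(others) - X[:, j]) ** 2)
    return np.argsort(errors)[::-1]            # high error = hard to reconstruct

X = np.random.randn(200, 10)
X[:, 3] = X[:, 0] + 0.01 * np.random.randn(200)  # feature 3 is redundant
print(rank_features_by_sparse_reconstruction(X)) # feature 3 ranks near the bottom
```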
ISBN (print): 9781467388528
Image segmentation is a key component in many computer vision systems, and it is recovering a prominent spot in the literature as methods improve and overcome their limitations. The outputs of most recent algorithms take the form of a hierarchical segmentation, which provides segmentations at different scales in a single tree-like structure. Commonly, these hierarchical methods start from low-level features and are unaware of the scale of the different regions they contain. As such, one may need to inspect many levels of the hierarchy to find the objects in a scene. This work modifies existing hierarchical algorithms to improve their alignment, that is, it adjusts the depth of regions in the tree to better couple depth and scale. To do so, we first train a regressor to predict the scale of regions from mid-level features. We then define the anchor slice as the set of regions that best balances over-segmentation and under-segmentation. The output of our method is an improved hierarchy, re-aligned by the anchor slice. To demonstrate the power of our method, we perform comprehensive experiments, which show that our method, as a post-processing step, can significantly improve the quality of hierarchical segmentation representations and ease the use of hierarchical image segmentation in high-level vision tasks such as object segmentation. We also show that the improvement generalizes well across different algorithms and datasets, with a low computational cost.
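The sketch below illustrates the two ingredients under assumed feature definitions and an assumed scoring rule (neither is taken from the paper): a regressor that predicts region scale from mid-level features, and an anchor slice chosen as the hierarchy level whose regions best match their predicted scales.

```python
# Hedged sketch: scale regressor + anchor-slice selection.
# The feature layout, "area" scale proxy, and disagreement score
# are illustrative assumptions, not the paper's exact definitions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Training data: one row of mid-level features per region
# (e.g., color, texture, boundary statistics) and the region's
# ground-truth log scale; placeholders used here.
features = np.random.rand(500, 8)
log_scale = np.random.rand(500) * 10
regressor = RandomForestRegressor(n_estimators=100).fit(features, log_scale)

def anchor_slice(hierarchy_levels):
    """Pick the level whose regions' actual sizes agree best with
    the scales the regressor predicts for them (assumed rule)."""
    def disagreement(level):
        feats = np.array([r["features"] for r in level])
        actual = np.array([np.log(r["area"]) for r in level])
        return np.mean(np.abs(regressor.predict(feats) - actual))
    return min(hierarchy_levels, key=disagreement)
```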
ISBN (print): 9781509037636
We address the following question: is it possible to reconstruct the geometry of an unknown environment using sparse and incomplete depth measurements? This problem is relevant for a resource-constrained robot that has to navigate and map an environment, but does not have enough on-board power or payload to carry a traditional depth sensor (e.g., a 3D lidar) and can only acquire a few (point-wise) depth measurements. In general, reconstruction from incomplete data is not possible, but when the robot operates in man-made environments, the depth exhibits some regularity (e.g., many planar surfaces with few edges); we leverage this regularity to infer depth from incomplete measurements. Our formulation bridges robotic perception with the compressive sensing literature in signal processing. We exploit this connection to provide formal results on exact depth recovery in 2D and 3D problems. Taking advantage of our specific sensing modality, we also prove novel and more powerful results that completely characterize the geometry of the signals we can reconstruct. Our results directly translate to practical algorithms for depth reconstruction; these algorithms are simple (they reduce to solving a linear program) and robust to noise. We test our algorithms on real and simulated data, and show that they enable accurate depth reconstruction from a handful of measurements and perform well even when the assumption of a structured environment is violated.
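To make the "reduces to solving a linear program" point concrete, here is a minimal 1D sketch: recover a piecewise-linear depth profile from a few exact samples by minimizing the l1 norm of its second differences. This only illustrates the compressive-sensing connection; the paper's 2D/3D formulation and recovery guarantees are more general.

```python
# l1 minimization of second differences as a linear program:
# variables z = [x (n), t (m)], minimize sum(t) s.t. |D2 x| <= t
# and x matches the measured samples exactly.
import numpy as np
from scipy.optimize import linprog

def recover_depth(n, sample_idx, samples):
    m = n - 2
    # Second-difference operator: (D2 x)_i = x_i - 2*x_{i+1} + x_{i+2}
    D2 = np.zeros((m, n))
    for i in range(m):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[D2, -np.eye(m)], [-D2, -np.eye(m)]])
    b_ub = np.zeros(2 * m)
    A_eq = np.zeros((len(sample_idx), n + m))
    A_eq[np.arange(len(sample_idx)), sample_idx] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=samples,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

# Tent-shaped depth profile (one slope change), measured at only 3 points
t = np.arange(50, dtype=float)
x_true = np.minimum(t, 60.0 - t)
idx = np.array([0, 30, 49])
x_hat = recover_depth(50, idx, x_true[idx])
print(np.max(np.abs(x_hat - x_true)))   # ~0: exact recovery for this profile
```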
ISBN (print): 9781467390064
Many recently proposed graph-processing frameworks utilize powerful computer clusters with dozens of cores to process massive graphs. Their usability and flexibility often come at a cost. We demonstrate that custom software written for "nanocomputers," including a credit-card-sized Raspberry Pi, a low-cost ARM server, and an Intel Atom computer, can process the same graphs. Our implementations of PageRank and connected components stream graphs from external storage while performing computation in the limited main memory of these nanocomputers. The results show that a $100 computer with an Intel Atom core can compute PageRank and connected components on a 1.5-billion-edge Twitter graph as quickly as graph-processing systems running on machines with up to 48 cores. As people continue to apply graph computations to large datasets, this research suggests that there may be cost and energy advantages to using nanocomputers.
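A hedged sketch of the streaming approach: PageRank over an edge list kept on disk, with only the rank vectors resident in memory. The file format and function names here are assumptions for illustration, not the authors' custom software.

```python
# Edge-streaming PageRank: the graph never enters main memory;
# only two O(n_nodes) vectors are kept resident.
import numpy as np

def pagerank_streaming(edge_file, n_nodes, n_iters=20, d=0.85):
    ranks = np.full(n_nodes, 1.0 / n_nodes)
    out_deg = np.zeros(n_nodes)
    with open(edge_file) as f:                 # one pass for out-degrees
        for line in f:
            src, _ = map(int, line.split())
            out_deg[src] += 1
    for _ in range(n_iters):
        contrib = np.zeros(n_nodes)
        with open(edge_file) as f:             # re-stream edges each iteration
            for line in f:
                src, dst = map(int, line.split())
                contrib[dst] += ranks[src] / out_deg[src]
        ranks = (1 - d) / n_nodes + d * contrib
    return ranks
```

Each iteration costs one sequential pass over external storage, which is exactly the trade (disk bandwidth for memory) that makes billion-edge graphs feasible on small machines.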
ISBN (print): 9781509008506
In today's world of spoofing attacks, high security is the most vital requirement. Developments in consumer electronics demand high security together with high accuracy and high speed of authentication. Human behavioural and physiological features used in biometrics offer broad scope as a solution to these security issues. However, existing biometric systems are highly complex in time, space, or both, and are thus unsuitable for very-high-security settings. We therefore propose an embedded finger-vein recognition system for authentication. The system is implemented using a novel finger-vein recognition algorithm: lacunarity, fractal dimension, and Gabor filters are used for feature extraction, and the extracted features are matched using a distance classifier. Analysis of the various features shows that kurtosis and range vary widely from person to person; based on this analysis, finger-vein recognition becomes easier and more reliable.
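Of the named descriptors, fractal dimension is the easiest to sketch; below is a standard box-counting estimate on a binary vein mask (the lacunarity/Gabor pipeline and the distance classifier are not reproduced here).

```python
# Box-counting fractal dimension of a 2D binary vein mask.
import numpy as np

def fractal_dimension(mask):
    """Estimate the box-counting dimension of a 2D binary mask."""
    size = min(mask.shape)
    scales = [s for s in (2, 4, 8, 16, 32) if s < size]
    counts = []
    for s in scales:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.sum(boxes.any(axis=(1, 3))))  # boxes with vein pixels
    # Slope of log(count) vs log(1/scale) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

vein = np.zeros((128, 128), dtype=bool)
vein[64, :] = True                    # a straight line has dimension ~1
print(fractal_dimension(vein))
```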
In this paper, the problem of segmenting volumetric medical images is considered. Fast and effective segmentation is obtained with the proposed approach, which combines the idea of supervoxels with the Fuzzy C-Means algorithm. In particular, Fuzzy C-Means is used to cluster supervoxels produced by fast 3D region growing. The method is further accelerated with the support of a graphics processor (GPU). A detailed description of the proposed approach is given. Results of applying the method to volumetric CT and MRI brain images and to CT images of various phantoms are presented, analysed, and discussed. Issues related to the accuracy of the method, memory workload, and running time are also considered.
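The clustering stage is standard Fuzzy C-Means; a compact NumPy sketch applied to per-supervoxel descriptors follows (mean intensities here; the supervoxels themselves are assumed to come from the 3D region growing, which is not shown).

```python
# Textbook Fuzzy C-Means iterations on supervoxel descriptors.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iters=100, eps=1e-6):
    n = X.shape[0]
    rng = np.random.default_rng(0)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + eps
        inv = dist ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

# Cluster 1D supervoxel mean intensities into 3 tissue classes
means = np.concatenate([np.random.normal(mu, 5, 100) for mu in (30, 90, 150)])
centers, U = fuzzy_c_means(means[:, None], c=3)
print(np.sort(centers.ravel()))                # approximately [30, 90, 150]
```

Clustering a few thousand supervoxel descriptors instead of millions of voxels is what makes the combination fast; the GPU acceleration mentioned above then applies to the region-growing and distance computations.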
New imaging stations aim for high spatial and temporal resolution and are characterized by ever-increasing sampling rates and demanding data-processing workflows. Key to successful imaging experiments is opening up high-performance computing resources. This includes carefully selected computing hardware and the development of advanced imaging algorithms optimized for the efficient use of parallel processor architectures. We present the novel UFO computing platform for online data processing in imaging experiments and image-based feedback. The platform handles the full data life cycle from the X-ray detector to long-term data archives. Core components of this system are an FPGA platform for ultra-fast data acquisition, the GPU-based UFO image-processing framework, and the fast control system "Concert". Reconstruction algorithms implemented in the UFO framework are optimized for the latest GPU architectures and provide a reconstruction throughput in the GB/s range. The control system "Concert" integrates high-speed computing nodes and fast beamline devices, thus enabling image-based control loops and advanced workflow automation for efficient beam-time usage. Low latencies are ensured by direct communication between FPGA and GPUs using AMD's DirectGMA technology. Time-resolved tomography is supported by cutting-edge regularization methods for high-quality reconstructions from a reduced number of projections. The new infrastructure at ANKA has dramatically accelerated tomography from hours to seconds and opened up new application fields, like high-throughput tomography, pump-probe radiography, and stroboscopic tomography. Ultra-fast X-ray cine-tomography for the first time allows one to observe the internal dynamics of moving millimeter-sized objects in real time.
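A toy sketch of the dataflow described above (detector acquisition, GPU reconstruction, image-based control feedback), modeled with Python threads and queues. This only illustrates the pipeline structure; the real UFO/Concert stack is a GPU/FPGA system whose APIs are not shown here.

```python
# Three-stage streaming pipeline: acquire -> reconstruct -> control.
import queue
import threading
import numpy as np

raw_frames = queue.Queue(maxsize=8)
recon_slices = queue.Queue(maxsize=8)

def acquire(n_frames):                    # stand-in for the FPGA readout
    for _ in range(n_frames):
        raw_frames.put(np.random.rand(256, 256))
    raw_frames.put(None)                  # end-of-stream marker

def reconstruct():                        # stand-in for GPU reconstruction
    while (frame := raw_frames.get()) is not None:
        recon_slices.put(frame.mean())    # placeholder "reconstruction"
    recon_slices.put(None)

def control_loop():                       # image-based feedback (Concert's role)
    while (stat := recon_slices.get()) is not None:
        if stat > 0.51:
            print("feedback: adjust acquisition parameters")

threads = [threading.Thread(target=f, args=a) for f, a in
           [(acquire, (32,)), (reconstruct, ()), (control_loop, ())]]
for t in threads: t.start()
for t in threads: t.join()
```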
ISBN (print): 9781509007691
A dictionary learning algorithm facilitates a sparse representation of a given set of training signals, which has a significant impact on signal reconstruction error in compressive sensing. To reduce the recovery error caused by environmental noise, this paper presents a novel structured dictionary learning method for sparse signal representation. The training signals are collected by compressive data gathering, and the self-coherence of the dictionary is penalized. In comparison with the DCT basis and the K-SVD method, experimental results verify that the proposed dictionary is more effective at alleviating the recovery error caused by environmental noise.
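One plausible reading of "the self-coherence of the dictionary is penalized" is a mu * ||D^T D - I||_F^2 term; the sketch below alternates OMP sparse coding with a gradient step on such a penalized objective. The exact formulation and training-data pipeline in the paper may differ.

```python
# Coherence-penalized dictionary learning (assumed formulation):
# minimize ||X - D A||_F^2 + mu * ||D^T D - I||_F^2 over unit-norm atoms.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def train_coherence_penalized_dict(X, n_atoms, n_iters=30, lr=0.1, mu=0.5):
    d, n = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        A = orthogonal_mp(D, X, n_nonzero_coefs=3)        # sparse coding step
        grad = -2 * (X - D @ A) @ A.T                     # reconstruction term
        grad += mu * 4 * D @ (D.T @ D - np.eye(n_atoms))  # coherence term
        D -= lr * grad / n
        D /= np.linalg.norm(D, axis=0)                    # keep unit-norm atoms
    return D

X = np.random.randn(20, 200)
D = train_coherence_penalized_dict(X, n_atoms=40)
G = D.T @ D - np.eye(40)
print(np.abs(G).max())    # off-diagonal entries stay small: low self-coherence
```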
ISBN (print): 9781467394178
The growing use of digital systems and multimedia has increased the demand for protecting digital content, creating a need for watermarking. A watermark provides a mechanism to determine whether a particular piece of digital media has been copied. In this paper, we present an algorithm for embedding an audio signal in an image based on the wavelet transform.
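A minimal sketch of wavelet-domain embedding along the lines the abstract describes: scaled audio samples are added to the diagonal detail coefficients of the image's 2D DWT. The Haar wavelet, embedding strength, and non-blind extraction are illustrative assumptions, not necessarily the paper's scheme.

```python
# Additive embedding of audio samples in DWT detail coefficients.
import numpy as np
import pywt

def embed_audio(image, audio, alpha=0.05):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    flat = cD.ravel().copy()
    flat[:audio.size] += alpha * audio         # hide scaled samples in cD
    cD = flat.reshape(cD.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def extract_audio(watermarked, original, n_samples, alpha=0.05):
    _, (_, _, cD_w) = pywt.dwt2(watermarked, "haar")
    _, (_, _, cD_o) = pywt.dwt2(original.astype(float), "haar")
    return (cD_w.ravel()[:n_samples] - cD_o.ravel()[:n_samples]) / alpha

image = np.random.rand(256, 256) * 255
audio = np.sin(np.linspace(0, 50, 4000))       # 4000 samples fit in 128x128 coeffs
marked = embed_audio(image, audio)
recovered = extract_audio(marked, image, audio.size)
print(np.max(np.abs(recovered - audio)))       # near zero (non-blind extraction)
```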
The analysis of the quality of particulate materials is of great importance for a variety of research and industrial applications. Most image-based methods rely on segmenting the image to measure the particles and aggregate their characteristics. However, the segmentation of particulate materials can be severely affected when the setup is not controlled, for instance when there are device errors, changes in lighting conditions, or when the camera gets dirty from dust or a similar substance. All of these circumstances are common in industrial setups like the one studied in this paper. This work presents a framework for quality estimation based on image-processing algorithms that avoids segmentation. The considered application scenario is online quality control of the production of Oriented Strand Boards (OSB), a type of wood panel frequently used in the construction and manufacturing industries. The proposed method quantizes the frequency domain into a histogram using a non-parametric method, which is then exploited with computational intelligence to classify the quality of superimposed wood particles deposited on a conveyor belt. The method has been tested on synthetic and real images with different noise conditions. The results illustrate the robustness of the approach and its ability to detect significant quality changes in the wood particles.
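As a rough sketch of the segmentation-free pipeline, the code below quantizes an image's frequency content into a histogram and feeds it to a classifier; the bin definition and the random-forest classifier are illustrative stand-ins for the paper's non-parametric method and computational-intelligence stage.

```python
# Frequency-domain histogram features + classifier, no segmentation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frequency_histogram(image, n_bins=32):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    logmag = np.log1p(spectrum).ravel()        # compress dynamic range
    hist, _ = np.histogram(logmag, bins=n_bins, density=True)
    return hist

# Train on labeled frames of the conveyor belt (labels are assumed:
# 0 = acceptable strand quality, 1 = degraded); placeholders below.
frames = [np.random.rand(128, 128) for _ in range(40)]
labels = np.random.randint(0, 2, 40)
X = np.array([frequency_histogram(f) for f in frames])
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
print(clf.predict(X[:3]))
```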