Support Vector Machine (SVM) classifiers are widely used to analyse features extracted from brain MRI data to identify useful biomarkers of pathology in several disease conditions. They are trained to distinguish pati...
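The kind of SVM training the abstract describes can be sketched as a minimal linear SVM fitted by sub-gradient descent on the hinge loss. The two-dimensional Gaussian features below are synthetic stand-ins for the MRI-derived features; the regularisation and learning-rate values are illustrative, not the paper's.

```python
import numpy as np

# Minimal linear SVM via per-sample sub-gradient descent on
# lam/2*||w||^2 + max(0, 1 - y*(w.x + b)); toy data, not MRI features.
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:       # inside margin: hinge active
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:                               # only the regularizer acts
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())
```

In practice a library implementation (e.g. a soft-margin SVM with kernel support) would replace this sketch; the loop above only shows the optimisation principle.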
ISBN: (Print) 9783319304809
The proceedings contain 31 papers. The special focus in this conference is on image processing, Fault-Tolerant Systems, Tools and Architectures. The topics include: a design methodology for the next generation of real-time vision processors; an EEG feature extraction accelerator enabling long-term epilepsy monitoring based on ultra-low-power WSNs; computing to the limit with heterogeneous CPU-FPGA devices in a video fusion application; an efficient hardware architecture for block-based image processing algorithms; an FPGA stereo matching processor based on the sum of Hamming distances; a comparison of machine learning classifiers for FPGA implementation of HOG-based human detection; a scalable dataflow accelerator for real-time onboard hyperspectral image classification; a redundant design approach with diversity of FPGA resource mapping; a method to analyze the susceptibility of HLS designs in SRAM-based FPGAs under soft errors; low-cost dynamic scrubbing for real-time systems; a new partitioning approach for hardware Trojan detection using side-channel measurements; a comprehensive set of schemes for PUF response generation; design and optimization of digital circuits by artificial evolution using hybrid multi-chromosome Cartesian genetic programming; a multi-codec framework to enhance data channels in FPGA streaming systems; a reconfigurable FPGA-based FFT processor for cognitive radio applications; real-time audio group delay correction with FFT convolution on FPGA; evaluating schedulers in a reconfigurable multicore heterogeneous system; and fast and resource-aware image processing operators utilizing highly configurable IP blocks.
ISBN: (Print) 9781509055869
In this paper, a General-Purpose Graphics Processing Unit (GPGPU) based concurrent implementation of a handwritten digit classifier is presented. Different handwriting styles make pattern recognition difficult, but the task is well suited to neural networks. Software packages such as Torch and MATLAB support multiple training algorithms for a network; choosing an appropriate training algorithm for a specific application can increase training speed. Furthermore, the computational power of GPUs can significantly improve both the training and classification speed of a neural network. In this work, the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits is used to train the network. The accuracy and training time of the digit classifier are evaluated for different algorithms, and concurrent training is then performed by exploiting the power of the GPU. Trained parameters are imported and used for concurrent classification with the Compute Unified Device Architecture (CUDA) computing language, which can be useful in numerous practical applications. Finally, the results of sequential and concurrent training and classification are compared.
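The training step the abstract summarises can be illustrated by a tiny two-layer network trained with plain gradient descent. This is a CPU stand-in for the GPU-accelerated training the paper describes: the 64-dimensional inputs and two-class labels below are synthetic placeholders, not MNIST, and the layer sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic data: two well-separated Gaussian blobs stand in for two digit classes.
X = np.vstack([rng.normal(0.2, 0.1, (50, 64)), rng.normal(0.8, 0.1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]                          # one-hot targets

W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 2));  b2 = np.zeros(2)

for _ in range(300):
    H = np.maximum(0, X @ W1 + b1)        # ReLU hidden layer
    P = softmax(H @ W2 + b2)
    G = (P - Y) / len(X)                  # cross-entropy gradient at the logits
    GH = (G @ W2.T) * (H > 0)             # backprop through ReLU
    W2 -= 0.5 * H.T @ G;  b2 -= 0.5 * G.sum(0)
    W1 -= 0.5 * X.T @ GH; b1 -= 0.5 * GH.sum(0)

acc = (P.argmax(1) == y).mean()
print(acc)
```

On a GPU the same matrix products would be dispatched as CUDA kernels, which is where the concurrency gains the paper measures come from.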
Content-based video retrieval systems require video to be segmented into objects. A large number of video segmentation algorithms have been proposed, both semi-automatic and automatic. Semi-automatic methods require...
The paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to represent the facial images distinctively. For this purpose, different local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on combinations of the local descriptors at the feature level, using simple histogram concatenation, are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as a classifier to carry out verification between impostors and clients. The proposed method has been tested on the CASIA-3D face database, and the experimental results show that it achieves high verification performance.
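The basic descriptor in the family above can be sketched directly: an 8-neighbour Local Binary Pattern with a normalised 256-bin histogram. This is the plain LBP formulation only; the patch-based variants (TPLBP/FPLBP) and the input image below are not from the paper.

```python
import numpy as np

# Basic 3x3-neighbourhood LBP: each pixel gets an 8-bit code from comparing
# its 8 neighbours against the centre; the image is a random placeholder.
def lbp_histogram(img):
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # offsets of the 8 neighbours, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()              # normalised 256-bin histogram

rng = np.random.default_rng(2)
face = rng.integers(0, 256, (32, 32)).astype(np.int32)
h = lbp_histogram(face)
print(h.shape, round(float(h.sum()), 6))
```

Concatenating such histograms from several descriptors, as the paper does at feature level, is just `np.concatenate` over the per-descriptor histograms before dimensionality reduction.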
For many image processing workflows, including change detection and data fusion, accurate and automated image-to-image registration is a critical precondition. Particularly, registering images with different modalit...
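A common automated building block for such registration is phase correlation, which recovers a translational offset from the cross-power spectrum. The sketch below handles only same-modality translation on a toy image; the multimodal method the abstract refers to is not reproduced here.

```python
import numpy as np

# Phase correlation: normalise the cross-power spectrum to keep phase only;
# its inverse FFT peaks at the translation between the two images.
def phase_correlation_shift(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12        # unit-magnitude spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, img))   # → (5, -3)
```

Because the test image is circularly shifted, the recovered offset is exact; real images need windowing and subpixel peak fitting.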
Digital photography has experienced great progress during the past decade. A lot of people record their moments via digital hand-held cameras. Pictures taken with digital cameras usually undergo some sort of degradation in the form of noise or blur, depending on the camera hardware and the environmental conditions in which the photos are taken. This leads to an ever-increasing demand for effective and efficient image enhancement algorithms to achieve high-quality output images in digital photography systems. In this dissertation, a new graph-based framework is introduced for different image restoration applications. This framework is based on exploiting the self-similarity present in images. We introduce a new definition of the normalized graph Laplacian matrix for image processing. We use this new definition to develop effective enhancement algorithms for image deblurring, image denoising, and image sharpening. First, we develop a regularization framework for image deblurring by constructing a new graph-based cost function. Minimizing the corresponding cost function yields effective outputs for different blur types, including out-of-focus and motion blurs. Our proposed deblurring algorithm, based on the new definition of the normalized graph Laplacian, provides performance and analysis advantages over previous methods. We have shown its effectiveness for several synthetic and real deblurring examples. Second, we develop a new graph-based framework for image denoising. The proposed denoising method exploits the similarity information in images by constructing the similarity matrix, which in turn is used to derive the corresponding graph Laplacian. A graph-based objective function with new data fidelity and smoothness terms is constructed and minimized. We also establish the relationship between our proposed regularized framework and two well-known iterative methods for improving the performance of kernel-based denoising methods, namely diffusion and boosting iterations. We com
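The pipeline the dissertation describes can be sketched on a 1-D signal: build a similarity (affinity) matrix, form the symmetric normalised Laplacian L = I - D^{-1/2} W D^{-1/2}, and denoise by solving (I + lam*L) x = y. Note this uses the textbook normalised Laplacian and a generic quadratic fidelity term, not the dissertation's own redefinition; kernel widths and lam are illustrative.

```python
import numpy as np

# Graph-Laplacian regularised denoising sketch: the affinity combines
# spatial proximity and value similarity (the self-similarity idea).
def denoise(y, sigma=0.5, lam=4.0):
    n = len(y)
    idx = np.arange(n)
    W = (np.exp(-((idx[:, None] - idx[None, :]) ** 2) / 8.0)
         * np.exp(-((y[:, None] - y[None, :]) ** 2) / (2 * sigma ** 2)))
    d = W.sum(axis=1)
    L = np.eye(n) - W / np.sqrt(d[:, None] * d[None, :])   # normalised Laplacian
    # minimise ||x - y||^2 + lam * x^T L x  =>  (I + lam*L) x = y
    return np.linalg.solve(np.eye(n) + lam * L, y)

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = clean + rng.normal(0, 0.2, 100)
out = denoise(noisy)
print(np.abs(noisy - clean).mean(), np.abs(out - clean).mean())
```

The solve acts as a low-pass filter on the graph spectrum, so slowly varying structure survives while the noise, which projects onto high graph frequencies, is attenuated.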
Current X-ray machines use lower radiation doses, which introduces noise into the output images. Such systems therefore need to enhance the image and reduce the noise via different algorithms to provide the best possible output. In addition, it is crucial to accelerate these image processing algorithms, as the output is intended to be real-time video (fluoroscopy). Such systems are used, for example, in surgeries for implants or other medical examinations, and there is a need to provide constant performance; otherwise, latency issues may lead to injuries or fatalities. Currently, such systems often rely on server PCs to implement the image processing chains. Since PC hardware needs to be replaced regularly during the lifetime of an X-ray machine, this increases the maintenance cost as well as the overall cost of the machine significantly. Therefore, we need a framework that allows the algorithm to be developed only once and then ported to a new platform while performance is ensured. To this end, a high-performance framework solution was investigated. A number of alternative solutions were examined, and the most attractive framework was selected to be the Open Computing Language (OpenCL). OpenCL provides the means to develop an image processing algorithm once and port it to different platforms, changing only the target platform through the OpenCL API. During this thesis exploration we were able to redevelop a high-quality algorithm provided by Philips Healthcare from a MATLAB model to OpenCL in an optimal time period, while investigating portability and performance. We first developed a tool chain that enables transformation from MATLAB to OpenCL. Furthermore, the 11 image processing kernels which constitute the algorithm were developed in OpenCL, achieving a speedup of up to 150x in some cases. We were able to run the algorithm on three different hardware platforms using the same OpenCL kernels and achieve a speedup of up to
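The write-once, port-anywhere pattern the thesis describes looks roughly like the following: an image processing step written as an OpenCL kernel, validated against a host-side reference. The 3x3 box blur here is a hypothetical placeholder, not one of the 11 Philips kernels; only the NumPy reference is executed.

```python
import numpy as np

# Hypothetical OpenCL kernel source: one work-item per pixel, borders skipped.
BOX_BLUR_CL = """
__kernel void box_blur(__global const float *src, __global float *dst,
                       const int w, const int h) {
    int x = get_global_id(0), y = get_global_id(1);
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;
    float s = 0.0f;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            s += src[(y + dy) * w + (x + dx)];
    dst[y * w + x] = s / 9.0f;
}
"""

# Host-side reference implementation used to validate the kernel per platform.
def box_blur_ref(img):
    out = img.copy()
    acc = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    out[1:-1, 1:-1] = acc[1:-1, 1:-1] / 9.0
    return out

img = np.arange(25, dtype=np.float64).reshape(5, 5)
print(box_blur_ref(img)[2, 2])   # centre pixel: mean of the 3x3 block → 12.0
```

Retargeting then means only selecting a different OpenCL platform/device at runtime; the kernel source string stays identical across CPU, GPU, and FPGA back ends.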
ISBN: (Print) 9781509034741
A reconfigurable computing architecture based on Field Programmable Gate Array (FPGA) technology is implemented for an Electrical Capacitance Tomography (ECT) system. The ECT system is used to image multi-phase flow when gas/liquid or solid/liquid phases occur. In ECT systems, a computationally exhaustive image reconstruction algorithm has to process large amounts of data. The software algorithms and hardware parameters are adjusted through a hardware/software co-design process using commercially available tools. The hardware system consists of capacitive sensors, wireless nodes, and an FPGA module. Results show that implementing the ECT image reconstruction algorithm on the FPGA platform achieves fast performance and small design density.
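The simplest ECT reconstruction scheme, linear back-projection, approximates the permittivity image as g = S^T c, where S is the sensitivity matrix mapping pixels to inter-electrode capacitance changes. The 1-D pixel grid and Gaussian-bump sensitivity maps below are synthetic illustrations, not the paper's sensor geometry, and the abstract does not say which reconstruction algorithm its FPGA implements.

```python
import numpy as np

n_pixels, n_meas = 64, 28      # e.g. 8 electrodes give 28 electrode pairs
idx = np.arange(n_pixels)
centers = np.linspace(0, n_pixels - 1, n_meas)

# Assumed sensitivity model: each measurement responds to a Gaussian
# neighbourhood of pixels; rows normalised so each measurement is an average.
S = np.exp(-((idx[None, :] - centers[:, None]) ** 2) / 30.0)
S /= S.sum(axis=1, keepdims=True)

g_true = np.zeros(n_pixels)
g_true[18:22] = 1.0            # a small high-permittivity inclusion
c = S @ g_true                 # simulated capacitance changes

g_lbp = S.T @ c                # linear back-projection
g_lbp /= g_lbp.max()           # scale image to [0, 1]
print(int(np.argmax(g_lbp)))
```

Back-projection is just two matrix-vector products, which is what makes it attractive for a streaming FPGA datapath; iterative schemes (Landweber, Tikhonov) add an inner loop around the same products.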
ISBN: (Print) 9788132225263; 9788132225256
Indian languages have very few linguistic resources, though they have a large speaker base. They are very rich in morphology, making it very difficult to do sequential tagging or any other type of language analysis. In natural language processing, part-of-speech (POS) tagging is the basic tool with which it is possible to extract terminology using linguistic patterns. The main aim of this research is to do sequential tagging for Indian languages based on unsupervised features and the distributional information of a word with its neighbouring words. The results of machine learning algorithms depend on the data representation. Not all of the data contribute to the creation of the model; some go unused, depending on the descriptive factors of data disparity. Data representations are designed using domain-specific knowledge, but one aim of Artificial Intelligence is to reduce these domain-dependent representations so that methods can be applied to new domains. Recently, deep learning algorithms have attracted substantial interest for reducing the dimension of features and extracting latent features. Recent developments and applications of deep learning algorithms are giving impressive results in several areas, mostly in image and text applications.
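The distributional information the abstract relies on, describing each word by its neighbours in a fixed window, can be sketched as a feature extractor. The toy sentence and window size are illustrative; the paper's unsupervised features and deep-learning dimensionality reduction are not reproduced.

```python
# Build context-window features for sequential tagging: each token is
# represented by itself plus the tokens at offsets within the window,
# with a <PAD> marker past the sentence boundary.
def context_features(tokens, window=1):
    feats = []
    for i, tok in enumerate(tokens):
        f = {"word": tok}
        for off in range(-window, window + 1):
            if off == 0:
                continue
            j = i + off
            f[f"ctx{off:+d}"] = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        feats.append(f)
    return feats

sent = ["the", "dog", "barked"]
feats = context_features(sent)
for f in feats:
    print(f)
```

In a real tagger these sparse features would feed a sequence model (CRF, HMM, or a neural tagger); a deep model would instead learn dense embeddings playing the same role.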