This work is devoted to the synthesis and investigation of a parallel algorithm for the finite-difference solution of the Poisson equation using the Jacobi method. The two-dimensional case is used as an example to demonstrate the efficacy of the pyramid method in the synthesis of this algorithm.
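For concreteness, here is a minimal serial sketch of the Jacobi sweep for the two-dimensional Poisson problem that the abstract parallelizes; the function name, the zero Dirichlet boundary conditions and the stopping rule are illustrative assumptions, and the pyramid-based parallel decomposition itself is not reproduced.

```python
import numpy as np

def jacobi_poisson_2d(f, h, n_iters=1000, tol=1e-8):
    """Solve -Laplacian(u) = f on a uniform grid of spacing h with zero
    Dirichlet boundary conditions, using plain Jacobi sweeps."""
    u = np.zeros_like(f)
    for _ in range(n_iters):
        u_new = u.copy()
        # Five-point stencil: each interior value becomes the average of
        # its four neighbours plus the scaled source term.
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                    u[1:-1, 2:] + u[1:-1, :-2] +
                                    h * h * f[1:-1, 1:-1])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```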
ISBN (Print): 9781138029262
Segmentation is important to define the spatial extent of anatomical body structures (objects) in medical images for quantitative analysis. In this context, it is desirable to eliminate (or at least minimize) user interaction. This aim is feasible by combining object delineation algorithms with Object Shape Models (OSMs). While the former can better capture the actual shape of the object in the image, the latter provide shape constraints to assist its location and delineation. We review two important classes of OSMs for medical image segmentation: Statistical (SOSMs) and Fuzzy (FOSMs). SOSMs rely on mapping the image onto a reference coordinate system that indicates the probability of each voxel belonging to the object (a probabilistic atlas built from a set of training images and their segmentation masks). Imperfect mappings due to shape and texture variations call for object delineation algorithms, but these methods usually assume that the atlas is at the best position for delineation. Multiple atlases per object can mitigate the problem, and a recent trend is to use each training mask as an individual atlas. By mapping them onto the coordinate system of a new image, object delineation can be accomplished by label fusion. However, the processing time for deformable registration is critical to make SOSMs suitable for large-scale studies. FOSMs have appeared as a recent alternative that avoids reference systems (deformable registration) by translating the training masks to a common reference point for model construction. This relaxes the shape constraints, but requires a more effective object delineation algorithm and an efficient approach for object location. One solution, named optimum object search, translates the model inside an estimated search region in the image while a criterion function guides the translation and determines the best delineated object among candidates. This makes segmentation with FOSMs considerably faster than with SOSMs, but SOSMs that adopt the o…
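As an illustration of the label fusion step mentioned for multi-atlas SOSMs, below is a minimal majority-vote sketch; the abstract does not specify which fusion rule is used, so the voting rule, function name and threshold are assumptions for illustration only.

```python
import numpy as np

def majority_vote_fusion(warped_masks, threshold=0.5):
    """Fuse a list of binary training masks, each already registered
    (warped) into the target image's coordinate system, by per-voxel
    majority voting. Returns a binary segmentation of the target."""
    stack = np.stack(warped_masks).astype(np.float32)
    # Fraction of atlases that label each voxel as object.
    vote_fraction = stack.mean(axis=0)
    # A voxel belongs to the object if at least `threshold` of atlases agree.
    return vote_fraction >= threshold
```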
As massive open online courses (MOOCs) and online intelligent tutoring systems (ITS) have become increasingly widespread, the number of learners enrolled in online courses has shown explosive growth. However, these lea...
ISBN (Print): 9781509010226
Steganalysis is capable of identifying carriers that have information hidden in them in such a way that its very existence is concealed. In this paper we propose a classification system for image steganalysis based on neural networks that reduces computational complexity through a pre-processing step (feature selection) performed using the Bhattacharyya distance. This approach identifies relevant features, a subset of the original features extracted from both the spatial and transform domains. It helps overcome the "curse of dimensionality" by removing redundant features in a feature selection step before classifying the dataset. Experiments are performed on datasets produced by four steganography algorithms (OutGuess, StegHide, PQ and nsF5) with two classifiers, Support Vector Machine and back-propagation neural networks. The classifiers, combined with the Bhattacharyya distance filter feature selection approach, show an improvement of 2-20% over using the total number of features.
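The Bhattacharyya-distance filter step can be illustrated with a small sketch that scores each feature independently under a per-class Gaussian assumption; the Gaussian assumption, function names and top-k selection are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np

def bhattacharyya_scores(X_cover, X_stego, eps=1e-12):
    """Score each feature by the Bhattacharyya distance between the
    cover and stego classes, assuming each feature is approximately
    Gaussian within a class. Inputs: (n_samples, n_features) arrays."""
    m1, m2 = X_cover.mean(axis=0), X_stego.mean(axis=0)
    v1, v2 = X_cover.var(axis=0) + eps, X_stego.var(axis=0) + eps
    # Closed-form distance between two univariate Gaussians.
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def select_top_k(X, scores, k):
    """Keep the k most discriminative features before training a classifier."""
    idx = np.argsort(scores)[::-1][:k]
    return X[:, idx], idx
```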
ISBN (Print): 9781509054442
Physically Unclonable Functions (PUFs) have recently received considerable attention for developing security mechanisms for applications such as the Internet of Things (IoT) by exploiting the natural randomness in device-specific characteristics. This approach complements and improves conventional security algorithms, which are vulnerable to security attacks due to recent advances in computational technology and fully automated hacking systems. In this project, we propose a new authentication mechanism based on a specific implementation of PUFs using metallic dendrites. Dendrites are nanomaterial devices that contain unique, complex and unclonable patterns (similar to human DNA). We propose a method to process dendrite images. The proposed framework comprises several steps, including denoising, skeletonizing, pruning and feature point extraction. The feature points are represented in terms of a tree-based weighted algorithm that converts the authentication problem into a graph matching problem. The test object is compared against a database of valid patterns using a novel algorithm to perform user identification and authentication. The proposed method demonstrates a high level of accuracy and a low computational complexity that grows linearly with the number of extracted points and the database size. It also significantly reduces the in-network storage capacity and communication rates required to maintain the database of users in large-scale networks.
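A hedged sketch of the skeleton-based feature-point step (endpoints and branch points of the dendrite pattern) is given below; the denoising, pruning and tree-based weighted matching stages of the proposed framework are not reproduced, and the neighbour-counting rule is an assumption made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_feature_points(binary_image):
    """Extract candidate feature points (endpoints and branch points)
    from a binary dendrite image: skeletonize, then classify each
    skeleton pixel by the number of skeleton neighbours it has."""
    skel = skeletonize(binary_image.astype(bool))
    # Count 8-connected skeleton neighbours of every pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skel.astype(np.uint8), kernel, mode='constant')
    endpoints = skel & (neighbours == 1)      # degree-1 skeleton pixels
    branch_points = skel & (neighbours >= 3)  # junctions in the pattern
    return np.argwhere(endpoints), np.argwhere(branch_points)
```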
This paper describes a framework for temporally consistent video completion. The proposed method can remove dynamic objects or restore missing or tainted regions in a video sequence by utilizing spatial and t...
ISBN (Print): 9788132225263; 9788132225256
Indian languages have very few linguistic resources, despite having a large speaker base. They are morphologically very rich, which makes sequential tagging or any other type of language analysis very difficult. In natural language processing, parts-of-speech (POS) tagging is the basic tool with which it is possible to extract terminology using linguistic patterns. The main aim of this research is to perform sequential tagging for Indian languages based on unsupervised features and the distributional information of a word with its neighboring words. The results of machine learning algorithms depend on the data representation. Not all of the data contributes to the creation of the model, leaving some of it unused, which depends on the descriptive factors of data disparity. Data representations are designed using domain-specific knowledge, but the aim of Artificial Intelligence is to reduce such domain-dependent representations so that they can be applied to domains that are new to the system. Recently, deep learning algorithms have attracted substantial interest for reducing the dimension of features or extracting latent features. Recent developments and applications of deep learning algorithms are giving impressive results in several areas, mostly in image and text applications.
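As a small illustration of distributional information gathered from neighboring words, the sketch below counts context-window co-occurrences per word type; the window size, function name and the subsequent dimensionality reduction by deep learning are not specified by the abstract and are assumptions here.

```python
from collections import Counter, defaultdict

def context_window_features(sentences, window=1):
    """Build simple distributional features: for every word type, count
    which word types occur within a small window around it. These counts
    can later be compressed (e.g. by an autoencoder) and fed to a
    sequence tagger as unsupervised features."""
    contexts = defaultdict(Counter)
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    contexts[word][tokens[j]] += 1
    return contexts
```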
ISBN (Print): 9781509035687
Mammographic Computer-Aided Diagnosis systems are applications designed to assist radiologists in the diagnosis of malignancy in mammographic findings. Most methods described in the literature do not perform a proper preprocessing step on mammographic images prior to classification, which can generate inconsistent results due to the potentially large amount of noise in medical images. This paper proposes a new method, based on Information Theory and Data Compression, for the detection of random noise in image bit planes. In order to validate the efficiency of the proposed noise removal method, we used Machine Learning algorithms to classify mammographic findings from the Digital Database for Screening Mammography. Results using texture features indicate that a reduction in the radiometric resolution by 4 or 5 bit planes in digitized screen-film mammographic images results in better classification performance.
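The bit-plane ideas can be illustrated with a short sketch: a compression-based estimate of how random each bit plane is, and a reduction of radiometric resolution by zeroing the least significant planes. The use of zlib as the compressor and the exact masking rule are assumptions for illustration, not the paper's specific method.

```python
import zlib
import numpy as np

def bitplane_compressibility(image):
    """Estimate how 'random' each bit plane of an 8-bit image is by how
    well it compresses: near-random planes compress poorly (ratio near 1),
    structured planes compress well."""
    ratios = []
    for b in range(8):
        plane = ((image >> b) & 1).astype(np.uint8)
        raw = np.packbits(plane).tobytes()
        ratios.append(len(zlib.compress(raw)) / max(len(raw), 1))
    return ratios  # index 0 = least significant bit plane

def reduce_radiometric_resolution(image, n_planes=4):
    """Zero out the n_planes least significant bit planes of an 8-bit image."""
    mask = np.uint8((0xFF << n_planes) & 0xFF)
    return image & mask
```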
Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this extent, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision. (C) 2016 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license.
ISBN (Print): 9789898533524
In this article we propose a linear-time algorithm for contour smoothing based on finding extrema. First we find the vertices where the contour's convexity changes, then we obtain the local minima, maxima and points of support, which are used in the resulting contour. The main goal of the proposed approach is to compute an accurate interior area of a bounding contour; it was successfully applied to recovering object contours after segmentation algorithms or human annotations, and to contour noise reduction after JPEG compression.
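A minimal sketch of the first step, detecting vertices where contour convexity changes via the sign of the cross product of consecutive edge vectors, is shown below; the selection of extrema and points of support that produces the final smoothed contour is not reproduced, and the function name is an assumption.

```python
import numpy as np

def convexity_change_vertices(contour):
    """Return indices of vertices of a closed polygonal contour where
    convexity changes, i.e. where the sign of the z-component of the
    cross product of consecutive edge vectors flips.
    contour: (N, 2) array of vertex coordinates in order."""
    pts = np.asarray(contour, dtype=float)
    e1 = pts - np.roll(pts, 1, axis=0)    # incoming edge at each vertex
    e2 = np.roll(pts, -1, axis=0) - pts   # outgoing edge at each vertex
    cross = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
    sign = np.sign(cross)
    # A convexity change occurs where consecutive turning signs differ.
    return np.flatnonzero(sign * np.roll(sign, 1) < 0)
```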