ISBN (digital): 9783319309330
ISBN (print): 9783319309330; 9783319309323
With the advance of data storage and data acquisition technologies, image databases have grown enormously, so accurate and efficient systems are needed to manage them. This paper focuses on transform-based techniques to search, browse, and retrieve images from a large database. We briefly discuss a CBIR technique for image retrieval that uses the Discrete Cosine Transform to generate feature vectors, and we survey different retrieval algorithms. The proposed work is evaluated on 9000 images from the MIRFLICKR database (Huiskes, ACM International Conference on Multimedia Information Retrieval (MIR'08), 2008 [1]). We compare the precision, recall, and running time of the different methods, evaluating their performance when querying with an image from the database and with a non-database image.
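A minimal Python sketch of the two ingredients named here, a DCT-based feature vector and precision/recall; the 8x8 low-frequency crop and every function name are illustrative assumptions, not the paper's exact design:

import numpy as np
from scipy.fft import dctn

def dct_feature(image, k=8):
    """Keep the k x k low-frequency 2-D DCT coefficients as a feature vector."""
    coeffs = dctn(image.astype(float), norm='ortho')
    return coeffs[:k, :k].ravel()

def precision_recall(retrieved, relevant):
    """Standard precision and recall over collections of image ids."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

Retrieval would then rank database images by the distance between their stored feature vectors and the query's.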
ISBN (print): 9789897582325
This work proposes a textual and graphical domain-specific language (DSL) designed especially for modeling and writing data and image processing algorithms. Since reusing algorithms and other functionality leads to higher program quality and usually shorter development time, this approach introduces a novel component-based language design. Special diagrams and structures, such as components, component diagrams, and component-instance diagrams, are introduced. The new language constructs allow an abstract, object-oriented description of data and image processing tasks. Additionally, a compatible graphical design interface is proposed, giving modelers and architects the opportunity to decide which kind of modeling they prefer (graphical or textual, including round-trip engineering).
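The DSL's own syntax is not shown in the abstract, but the component vs. component-instance distinction it describes maps loosely onto a class/instance structure; this Python analogy (all names hypothetical) is meant only to convey that split:

from dataclasses import dataclass, field

@dataclass
class Component:                     # what a component-diagram declares
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

@dataclass
class ComponentInstance:             # what a component-instance-diagram wires up
    component: Component
    bindings: dict = field(default_factory=dict)

blur = Component('GaussianBlur', inputs=['image', 'sigma'], outputs=['image'])
step1 = ComponentInstance(blur, bindings={'sigma': 1.5})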
ISBN (print): 9786176079132
The paper analyzes well-known automated microscopy systems and presents a comparative analysis of low- and middle-level algorithm designs. An adaptive module for image pre-processing and segmentation has been developed.
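As a rough illustration of what such a module does, here is a self-contained Python sketch that contrast-stretches a grayscale image and binarizes it with Otsu's threshold; this is a generic stand-in, not the paper's adaptive algorithm:

import numpy as np

def otsu_threshold(gray):
    """Pick the 0-255 threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                     # probability of the background class
    mu = np.cumsum(p * np.arange(256))    # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(sigma_b))

def preprocess_and_segment(gray):
    """Contrast-stretch to [0, 255], then binarize at Otsu's threshold."""
    g = gray.astype(float)
    g = 255 * (g - g.min()) / max(np.ptp(g), 1)
    return (g > otsu_threshold(g.astype(np.uint8))).astype(np.uint8)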
ISBN (print): 9781785616525
The data structure of hyperspectral images contains information from spectral ranges far beyond the limits of conventional, visible-light imaging devices. This additional information comes at the cost of unwieldy image sizes, an important computational-resource consideration for many applications. Recent studies have shown that the large amount of redundancy typical of hyperspectral images can be discarded, avoiding classification noise, by means of sparse signal processing principles. To improve classification performance, the class-dependent Sparse-Representation Classification (cd-SRC) algorithm includes Euclidean-distance information between the sparse representation of a sample pixel to be classified and the training classes. The current work describes the use of the Manhattan (or city-block) distance to improve the cd-SRC classification procedure. Our results show that classification performance using the Euclidean distance is comparable to that of the Manhattan distance metric, which requires fewer and significantly less computationally expensive operations. In addition, we show that sparse-representation classification has a significant performance advantage over established hyperspectral image classification algorithms, such as the well-known Minimum Euclidean Distance and Support Vector Machine classifiers.
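The cost argument is easy to see in code: the Manhattan distance needs only absolute differences, while the Euclidean distance also needs squares and a square root. A toy Python sketch of the distance step follows; the nearest-centroid assignment is a simplification of cd-SRC, and all names are illustrative:

import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))    # squares plus a square root

def manhattan(a, b):
    return np.sum(np.abs(a - b))            # absolute differences only

def nearest_class(x, class_centroids, metric=manhattan):
    """Assign x to the class whose centroid is nearest under the metric."""
    return min(class_centroids, key=lambda c: metric(x, class_centroids[c]))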
ISBN (print): 9781538622834
Low-dose proton computed tomography (pCT) is an evolving imaging modality used in proton therapy planning to address the range-uncertainty problem. The goal of pCT is to generate a 3D map of relative stopping power measurements with high accuracy within clinically required time frames. Generating accurate relative stopping power values in the shortest amount of time is a key goal when developing image reconstruction software. Existing image reconstruction software has met, and even exceeded, this time goal, but requires clusters with hundreds of processors. This paper describes a novel reconstruction technique using two graphics processing unit devices. The proposed technique is tested on both simulated and experimental datasets, and on two different systems, namely Nvidia K40 and P100 graphics processing units from IBM and Cray. The experimental results demonstrate that our proposed reconstruction method meets both the timing and accuracy requirements, with the benefits of reasonable cost and efficient use of power.
ISBN (digital): 9788132227557
ISBN (print): 9788132227557; 9788132227533
This paper presents a new approach for the quantification of radiographic defects. The approach is based on calculating the physical size of a pixel using a known image quality indicator present in the radiographic image. The method is first applied to ground-truth shapes of different forms whose sizes are known in advance, and is then validated on a real defect (porosity), which it quantifies accurately. The image processing techniques applied to the radiographic image are contrast enhancement, noise reduction, and image segmentation, used to quantify the defects present in the image. The image processing algorithms are validated using the image quality metrics Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
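A brief sketch of the calibration idea and the two validation metrics, assuming a known IQI length in millimeters and its measured extent in pixels (the names and the worked numbers are illustrative):

import numpy as np

def pixel_size_mm(iqi_length_mm, iqi_length_px):
    """Physical size of one pixel, calibrated from the known IQI."""
    return iqi_length_mm / iqi_length_px

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

For example, a 10 mm IQI wire spanning 250 pixels gives 0.04 mm per pixel, so a pore measured at 30 pixels across is roughly 1.2 mm.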
ISBN (print): 9781538604915
Image sets and videos can be modeled as subspaces, which are points on Grassmann manifolds. Clustering such visual data lying on Grassmann manifolds is difficult because state-of-the-art methods apply only to vector spaces rather than non-Euclidean geometries. In this paper, we propose a novel algorithm, termed kernel sparse subspace clustering on the Grassmann manifold (GKSSC), which embeds the Grassmann manifold into a Reproducing Kernel Hilbert Space (RKHS) by an appropriate Gaussian projection kernel. This kernel is applied to obtain kernel sparse representations of data on Grassmann manifolds, utilizing the self-expressive property and exploiting the intrinsic Riemannian geometry of the data. Although the Grassmann manifold is compact, the geodesic distances between Grassmann points are well captured by kernel sparse representations based on linear reconstruction. With these kernel sparse representations, clustering accuracy on the prevalent public dataset exceeds 90 percent, outperforming state-of-the-art algorithms, and the robustness of our algorithm is demonstrated as well.
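For intuition, a small NumPy sketch of representing an image set as a point on the Grassmann manifold and evaluating a projection kernel on such points; the Gaussian form shown, built on the projection distance d^2 = p - ||X^T Y||_F^2, is a common construction and an assumption here, not necessarily the exact GKSSC kernel:

import numpy as np

def grassmann_point(data, p):
    """Span of the top-p left singular vectors of an image set
    (columns = vectorized frames), returned as an orthonormal basis."""
    u, _, _ = np.linalg.svd(data, full_matrices=False)
    return u[:, :p]

def projection_kernel(X, Y):
    """||X^T Y||_F^2, the standard projection kernel on the Grassmannian."""
    return np.linalg.norm(X.T @ Y, 'fro') ** 2

def gaussian_projection_kernel(X, Y, beta=1.0):
    p = X.shape[1]
    return np.exp(-beta * (p - projection_kernel(X, Y)))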
In computerized systems such as voice control units, personal identification, IP telephony, weapon control commands, acceptance of applications for reference services, automated stenography, and recognition of individua...
Nowadays there are many computer vision algorithms dedicated to solving the problem of object detection from many different perspectives. Many of these algorithms take considerable processing time even for low resolu...
Due to their high accuracy, inherent redundancy, and embarrassingly parallel nature, neural networks are fast becoming mainstream machine learning algorithms. However, these advantages come at the cost of high memory and processing requirements (which can be met by GPUs, FPGAs, or ASICs). For embedded systems, the requirements are particularly challenging because of stiff power and timing budgets. Thanks to the availability of efficient mapping tools, GPUs are an appealing platform for implementing neural networks. While there is significant work implementing image recognition (in particular Convolutional Neural Networks) on GPUs, only a few works deal with the efficient implementation of speech recognition on GPUs, and the work that does focus on speech recognition does not address embedded systems. To tackle this issue, this paper presents SPEED, an open-source framework to accelerate speech recognition on embedded GPUs. We have used the Eesen speech recognition framework because it is considered among the most accurate speech recognition techniques. Experimental results reveal that the proposed techniques offer a 2.6X speedup compared to the state of the art.