ISBN:
(print) 9781450341974
The research presented in this paper analyzes different algorithms for scheduling a set of potentially interdependent jobs in order to minimize the total runtime, or makespan, when data communication costs are considered. On distributed file systems such as HDFS, files are spread across a cluster of machines. Once a request, such as a query, that takes the data in these files as input is translated into a set of jobs, these jobs must be scheduled across machines in the cluster. Jobs consume input files stored in the distributed file system or on cluster nodes, and produce output that is potentially consumed by future jobs. If a job needs a particular file as input, the job must either be executed on the machine holding the file or incur a time penalty to copy it, increasing the job's latency. At the same time, independent jobs are ideally scheduled at the same time on different machines, in order to take advantage of parallelism. Both minimizing communication costs and maximizing parallelism serve to minimize the total runtime of a set of jobs. Furthermore, the problem gets more complex when certain jobs must wait for previous jobs to provide input, as is frequently the case when a set of jobs represents the steps of a query on a distributed database.
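The locality-versus-parallelism trade-off described above can be illustrated with a toy greedy list scheduler. This is a hedged sketch, not one of the algorithms analyzed in the paper; the uniform `copy_cost` and `run_time` parameters are simplifying assumptions.

```python
def greedy_schedule(jobs, machines, file_location, copy_cost, run_time):
    """Assign each job to the machine where it finishes earliest.

    jobs: list of (job_id, input_file) pairs, in schedule order.
    file_location: maps each file to the machine that stores it.
    copy_cost / run_time: illustrative uniform constants (assumptions).
    """
    finish = {m: 0.0 for m in machines}   # earliest free time per machine
    placement = {}
    for job, f in jobs:
        best_m, best_end = None, float("inf")
        for m in machines:
            # Pay a copy penalty when the input lives on another machine.
            penalty = 0.0 if file_location[f] == m else copy_cost
            end = finish[m] + penalty + run_time
            if end < best_end:
                best_m, best_end = m, end
        finish[best_m] = best_end
        placement[job] = best_m
    return placement, max(finish.values())   # makespan = latest finish
```

With two machines, a remote copy costing 5 and each job running 10 time units, jobs whose input is local stay in place while independent jobs spread across machines, capturing both effects the paragraph describes.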
ISBN:
(print) 9789811051456
The book provides a comprehensive overview of the theory, methods, applications and tools of cognition and recognition. It is a collection of the best selected papers presented at the International Conference on Cognition and Recognition 2016 (ICCR 2016) and will be helpful to scientists and researchers in the fields of image processing, pattern recognition and computer vision for advanced studies. Nowadays, researchers work in interdisciplinary areas, and the proceedings of ICCR 2016 play a major role in gathering those significant works in one place. The chapters included in the proceedings cover both theoretical and practical aspects of different areas such as nature-inspired algorithms, fuzzy systems, data mining, signal processing, image processing, text processing, wireless sensor networks, network security and cellular automata.
ISBN:
(digital) 9781510613195
ISBN:
(print) 9781510613195; 9781510613188
A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and it is also a burden for data processing and transmission. The theory of compressive sensing (CS) has been around for almost a decade, and extensive experiments show that CS performs well in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix valid for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics such as non-negativity and smoothness. Therefore, the goal of this paper is to present a novel measurement matrix that does not rely on the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth roughly every 90 minutes and their revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach using the measurements from the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes undersampled measurements as input and outputs an intermediate reconstruction image. It is well known that training such a network takes a long time; fortunately, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
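The two-part sensing matrix can be sketched as follows. The sizes, the block-average form of the thumbnail rows, and the Gaussian choice for the CS rows are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def combined_measurement_matrix(n, block, m_cs, seed=0):
    """Stack a Nyquist-style thumbnail sampler on top of random CS rows.

    n: signal length (assumed divisible by block);
    block: thumbnail block size (each thumbnail row averages one block);
    m_cs: number of conventional Gaussian CS rows.
    """
    n_thumb = n // block
    thumb = np.zeros((n_thumb, n))
    for i in range(n_thumb):
        thumb[i, i * block:(i + 1) * block] = 1.0 / block  # block average
    rng = np.random.default_rng(seed)
    cs = rng.standard_normal((m_cs, n)) / np.sqrt(m_cs)    # Gaussian rows
    return np.vstack([thumb, cs])

Phi = combined_measurement_matrix(n=16, block=4, m_cs=6)
print(Phi.shape)  # 4 thumbnail rows + 6 CS rows acting on a length-16 signal
```

The first rows of `Phi @ x` give a coarse thumbnail usable directly (or matched against archived images of the same scene), while the remaining rows carry the compressive measurements fed to the reconstruction network.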
ISBN:
(print) 9781509040179
Conventional skeleton extraction methods require a closed boundary constraint to solve the problem. In natural images the closed boundary constraint might not be easily satisfied due to the similarity between the object and its background, occlusion, etc. In this paper a novel approach based on the Delaunay triangulation is proposed to solve skeleton extraction in natural images. The algorithm shows promising results in extracting skeletons in natural images.
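One common Delaunay-based skeleton approximation (the chordal-axis idea) takes the midpoints of the triangulation's internal edges. This sketch assumes sampled boundary points are already available and is not the paper's algorithm for natural images.

```python
from collections import Counter

import numpy as np
from scipy.spatial import Delaunay

def skeleton_points(boundary_pts):
    """Approximate a shape's skeleton from sampled boundary points."""
    tri = Delaunay(boundary_pts)
    # Count how many triangles share each edge; internal edges appear twice,
    # hull (boundary) edges only once.
    edge_count = Counter()
    for simplex in tri.simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edge_count[tuple(sorted((simplex[a], simplex[b])))] += 1
    internal = [e for e, c in edge_count.items() if c == 2]
    # Midpoints of internal edges trace the medial region of the shape.
    return np.array([(boundary_pts[i] + boundary_pts[j]) / 2.0
                     for i, j in internal])
```

For a thin rectangular shape sampled along its two long sides, the internal-edge midpoints fall on the centerline, which is exactly the skeleton one expects.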
ISBN:
(print) 9781509048090
The classical image enlargement algorithm is an analytical method for producing the finer-resolution image frequently required for advanced digital image processing (DIP) from a single lower-resolution image, typically acquired by an embedded digital camera. Because of its low computational cost, Single-Image Super-Resolution (SISR), which operates on a single lower-resolution image, is one of the most widely used Super-Resolution Reconstruction (SRR) algorithms; this paper therefore proposes image enlargement based on the SISR algorithm using high-spectrum estimation and Tukey's biweight constraint function. In general, the performance of the SISR algorithm depends on up to three parameters (b, h, k), and determining optimized values for these parameters is burdensome. To solve this problem, Tukey's biweight constraint function, which depends on merely a single parameter (T), unlike the classical constraint function with its three parameters, is employed in the SISR algorithm. Experiments on 14 benchmark images corrupted by various forms of noise show that the novel SISR algorithm makes the parameter-setting process efficient and effortless, while its performance (with a single parameter) is nearly equal to that of the original SISR (with three parameters). Owing to the great reduction in parameter-setting time, this novel SISR algorithm is more advisable for real-time applications.
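Tukey's biweight function with its single parameter T can be sketched as a residual weighting; how it enters the SISR reconstruction as a constraint term is not shown here.

```python
import numpy as np

def tukey_biweight(r, T):
    """Weight assigned to a residual r under Tukey's biweight.

    Residuals are smoothly down-weighted and cut off entirely beyond
    |r| > T, which suppresses outliers and heavy noise; T is the single
    tuning parameter the abstract refers to.
    """
    r = np.asarray(r, dtype=float)
    w = (1.0 - (r / T) ** 2) ** 2
    return np.where(np.abs(r) <= T, w, 0.0)

print(tukey_biweight([0.0, 1.0, 3.0], T=2.0))
```

A zero residual gets full weight 1, a residual of 1 (half of T) is down-weighted to 0.5625, and a residual beyond T contributes nothing.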
Recent models for learned image compression are based on autoencoders that learn approximately invertible mappings from pixels to a quantized latent representation. The transforms are combined with an entropy model, which is a prior on the latent representation that can be used with standard arithmetic coding algorithms to generate a compressed bitstream. Recently, hierarchical entropy models were introduced as a way to exploit more structure in the latents than previous fully factorized priors, improving compression performance while maintaining end-to-end optimization. Inspired by the success of autoregressive priors in probabilistic generative models, we examine autoregressive, hierarchical, and combined priors as alternatives, weighing their costs and benefits in the context of image compression. While it is well known that autoregressive models can incur a significant computational penalty, we find that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models. The combined model yields state-of-the-art rate–distortion performance and generates smaller files than existing methods: 15.8% rate reductions over the baseline hierarchical model and 59.8%, 35%, and 8.4% savings over JPEG, JPEG2000, and BPG, respectively. To the best of our knowledge, our model is the first learning-based method to outperform the top standard image codec (BPG) on both the PSNR and MS-SSIM distortion metrics.
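The link between an entropy model and bitstream length can be made concrete: under any prior p, arithmetic coding spends about -log2 p(y) bits per symbol. The factorized toy prior below is an illustration only; the paper's hierarchical and autoregressive priors improve compression precisely by making p(y) sharper.

```python
import math

def rate_bits(latent, prob):
    """Ideal code length, in bits, of a quantized latent under a
    fully factorized prior: each symbol costs -log2 of its probability."""
    return sum(-math.log2(prob[s]) for s in latent)

p = {0: 0.5, 1: 0.25, -1: 0.25}      # toy factorized prior
print(rate_bits([0, 0, 1, -1], p))   # 1 + 1 + 2 + 2 = 6.0 bits
```

A hierarchical prior would replace the fixed table `p` with probabilities conditioned on transmitted side information, and an autoregressive prior with probabilities conditioned on previously decoded symbols; both lower the sum whenever the conditioning sharpens the predictions.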
ISBN:
(print) 9781538618004
In this paper we discuss the application of the delta modulation technique to a modified block truncation coding algorithm. The main idea of the research is to propose a simple improvement to the block truncation coding algorithm and to analyze its performance. The proposed hybrid system is tested on a set of standard grayscale test images, and the results are compared with the classic method.
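For context, classic block truncation coding reduces each block to a bit plane plus two reconstruction levels that preserve the block's mean and standard deviation. This sketch shows the base algorithm only, not the paper's delta-modulation modification.

```python
import numpy as np

def btc_encode(block):
    """Classic BTC: threshold at the block mean, keep two levels that
    preserve the first two sample moments of the block."""
    x = block.astype(float).ravel()
    n = x.size
    m, s = x.mean(), x.std()
    bits = x >= m                      # 1 bit per pixel
    q = int(bits.sum())                # number of above-mean pixels
    low = m - s * np.sqrt(q / (n - q)) if q < n else m
    high = m + s * np.sqrt((n - q) / q) if q > 0 else m
    return bits.reshape(block.shape), low, high

def btc_decode(bits, low, high):
    """Rebuild the block from the bit plane and the two levels."""
    return np.where(bits, high, low)
```

On a two-level block the coding is lossless, which makes the moment-preservation property easy to check by hand.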
ISBN:
(print) 9781538615263
In this paper, we propose a novel method for real-time lane detection based on unsupervised learning, using images from a camera mounted on the dashboard of a vehicle. Lane detection is a crucial element of an intelligent vehicle safety system. Lane departure warning systems rely on accurate detection of lanes and are important for a driver assistance system. In our system, we extract the features of interest and detect the lane markings, or foreground regions, in each frame. We develop a spatio-temporal incremental clustering algorithm coupled with curve fitting for detecting the lanes on the fly. The spatio-temporal incremental clustering is performed over the detected foreground region to find the lanes accurately. Each cluster represents a lane across space and time. The lanes are then identified by fitting curves to these clusters. Our system accurately detects straight and curved lanes as well as continuous and non-continuous lanes, is independent of the number of lanes in the frame, and is fast because it does not use any database for processing the images. Experimental results show that our algorithm can accurately perform lane detection in challenging scenarios.
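The clustering-plus-curve-fitting idea can be sketched as below. The centroid distance rule, the threshold, and the polynomial degree are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def incremental_cluster(points, threshold):
    """Assign each incoming lane-marking point to the nearest existing
    cluster (by distance to centroid), or open a new cluster when no
    centroid is within the threshold."""
    clusters = []                          # each cluster: list of points
    for p in points:
        p = np.asarray(p, dtype=float)
        best, best_d = None, float("inf")
        for c in clusters:
            d = np.linalg.norm(p - np.mean(c, axis=0))
            if d < best_d:
                best, best_d = c, d
        if best is not None and best_d <= threshold:
            best.append(p)
        else:
            clusters.append([p])
    return clusters

def fit_lane(cluster, degree=1):
    """Fit a lane curve x = f(y) to one cluster's points."""
    pts = np.array(cluster)
    return np.polyfit(pts[:, 1], pts[:, 0], degree)
```

Points detected in successive frames stream through `incremental_cluster`, so each lane accumulates evidence across space and time without storing a database of images.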
ISBN:
(print) 9781538646595
Given the high demand for automated systems for human action recognition, great efforts have been undertaken in recent decades to progress the field. In this paper, we present frameworks for single- and multi-viewpoint action recognition based on the Space-Time Volume (STV) of human silhouettes and 3D-Histogram of Oriented Gradient (3D-HOG) embedding. We exploit fast computational approaches involving Principal Component Analysis (PCA) over the local feature spaces for compactly describing actions as combinations of local gestures, and L_2-Regularized Logistic Regression (L2-RLR) for learning the action model from local features. Superior results on the Weizmann and i3DPost datasets confirm the efficacy of the proposed approaches compared to the baseline method and other works, in terms of accuracy and robustness to appearance changes.
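The PCA compaction step can be sketched in a few lines on toy data; the 3D-HOG extraction and the L2-RLR classifier are not shown.

```python
import numpy as np

def pca_compress(X, k):
    """Project rows of X (local feature vectors) onto the top-k
    principal components for a compact description."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))   # 20 toy descriptors of dimension 8
Z = pca_compress(X, k=3)
print(Z.shape)                     # compacted to 3 dimensions each
```

The compacted vectors `Z` would then feed an L2-regularized logistic regression, which corresponds to minimizing the logistic loss plus a squared-norm penalty on the weights.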
ISBN:
(digital) 9781510612501
ISBN:
(print) 9781510612501; 9781510612495
A new technique of Mueller-matrix mapping of the polycrystalline structure of histological sections of biological tissues is suggested. Algorithms are derived for reconstructing the distributions of the linear and circular dichroism parameters of histological sections of liver tissue from mice with different degrees of severity of diabetes. The interconnections between such distributions and the linear and circular dichroism parameters of the mouse liver histological sections are defined. Comparative investigations of the coordinate distributions of the amplitude-anisotropy parameters formed by liver tissue with varying severity of diabetes (10 days and 24 days) are performed. The values and ranges of change of the statistical parameters (moments of the 1st to 4th order) of the coordinate distributions of linear and circular dichroism are defined. Objective criteria for differentiating the degree of severity of diabetes are determined.
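The statistical moments of the 1st to 4th order mentioned above are conventionally the mean, variance, skewness and kurtosis of a distribution. A minimal sketch on a toy array follows; the actual dichroism maps are not reproduced here.

```python
import numpy as np

def first_four_moments(x):
    """Mean, variance, skewness and kurtosis of a sample, the four
    statistics used to characterize the coordinate distributions."""
    x = np.asarray(x, dtype=float)
    m1 = x.mean()                           # 1st moment: mean
    m2 = x.var()                            # 2nd central moment: variance
    s = np.sqrt(m2)
    m3 = ((x - m1) ** 3).mean() / s ** 3    # 3rd standardized: skewness
    m4 = ((x - m1) ** 4).mean() / s ** 4    # 4th standardized: kurtosis
    return m1, m2, m3, m4
```

In practice each dichroism map would be flattened into such a sample, and the resulting four values compared between the 10-day and 24-day groups.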