This paper considers the identification of Wiener-Hammerstein systems using models based on Least-Squares Support Vector Machines (LS-SVM). The power of fully black-box NARX-type models is evaluated and compared with models incorporating information about the structure of the systems. For the NARX models it is shown how to extend the kernel-based estimator to large data sets. For the structured model the emphasis is on preserving the convexity of the estimation problem through a suitable relaxation of the original problem. To develop an empirical understanding of the implications of the different model design choices, all considered models are compared on an artificial system under a number of different experimental conditions. The obtained results are then validated on the Wiener-Hammerstein benchmark data set and the final models are presented. It is illustrated that black-box models are a suitable technique for the identification of Wiener-Hammerstein systems, and that the incorporation of structural information results in significant improvements in modeling performance. (C) 2012 Elsevier Ltd. All rights reserved.
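The abstract does not spell out the estimator, so the following is only a rough sketch of the standard LS-SVM function estimation step applied to NARX regressors built from past inputs and outputs. The Gaussian RBF kernel, the lag orders na/nb, the hyperparameters gamma and sigma, and all function names are assumptions for illustration, not the settings used in the paper; the large-scale extension mentioned in the abstract is not shown.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between row-wise regressor sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def build_narx_regressors(u, y, na, nb):
    """Stack past outputs and past inputs into NARX regressor vectors."""
    lag = max(na, nb)
    X = [np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]] for t in range(lag, len(y))]
    return np.array(X), y[lag:]

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM dual linear system for the weights alpha and bias b."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xtrain, alpha, b, Xnew, sigma):
    """Kernel expansion f(x) = sum_i alpha_i K(x_i, x) + b."""
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b

# Hypothetical usage on recorded input/output sequences u, y (1-D numpy arrays):
# X, targets = build_narx_regressors(u, y, na=3, nb=3)
# alpha, b = lssvm_fit(X, targets, gamma=100.0, sigma=1.0)
# y_hat = lssvm_predict(X, alpha, b, X, sigma=1.0)
```

Training reduces to solving one dense linear system of size proportional to the number of samples, which is what makes the plain estimator hard to scale and motivates the large-data-set extension discussed in the paper.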
ISBN (Print): 9781467321433
MapReduce is becoming a leading large-scale data processing model that provides a logical framework for cloud computing. Hadoop, an open-source implementation of the MapReduce framework, is widely used to realize this kind of parallel computing model. Nodes in a current Hadoop environment are normally assumed to be homogeneous. Efficient resource management in clouds is crucial for improving the performance of MapReduce applications and the utilization of resources. However, the original scheduling scheme in Hadoop assigns tasks to each node based on a fixed, static number of slots, without considering the physical workload on the node, such as its CPU utilization. This paper proposes a dynamic slot-based task scheduling scheme that takes the physical workload on each node into account so as to prevent resource underutilization. The evaluation results show that the proposed scheme raises the overall computation efficiency among heterogeneous nodes in the cloud.
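As a hedged sketch of the idea described in the abstract, and not the paper's actual algorithm, the snippet below derives a per-node slot count from the node's reported CPU utilization instead of using a fixed static number, and then dispatches pending tasks to the nodes with the most free slots. The scaling rule, the default limits, and the function names are hypothetical.

```python
def dynamic_slots(cpu_utilization, max_slots=8, min_slots=1):
    """Scale the number of task slots on a node with its spare CPU capacity.
    A fully idle node exposes max_slots; a heavily loaded node is reduced
    toward min_slots so new tasks are not piled onto an overloaded machine."""
    spare = max(0.0, 1.0 - cpu_utilization)
    slots = min_slots + int(round(spare * (max_slots - min_slots)))
    return max(min_slots, min(max_slots, slots))

def assign_tasks(node_cpu, tasks):
    """Greedy dispatch: give each pending task to the node with the most free slots.

    node_cpu : dict mapping node name -> current CPU utilization in [0, 1]
    tasks    : list of task identifiers
    """
    free = {name: dynamic_slots(util) for name, util in node_cpu.items()}
    schedule = {name: [] for name in node_cpu}
    for task in tasks:
        node = max(free, key=free.get)
        if free[node] == 0:
            break                      # every node is saturated; wait for the next heartbeat
        schedule[node].append(task)
        free[node] -= 1
    return schedule

# Example: an idle node receives more work than a busy one.
# assign_tasks({"worker-1": 0.9, "worker-2": 0.1}, ["t1", "t2", "t3", "t4"])
```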
ISBN (Print): 9781479934065
This paper proposes a data structure, the Spectral Angle Sensitive Forest (SASF), designed for indexing and matching hyperspectral data under the spectral angle metric at low computational cost. We show theoretically and experimentally that the new method outperforms traditional data structures for high-dimensional data sets, such as the Vantage Point Tree, and overcomes the drawback of the Locality Sensitive Hashing algorithm, in which a query may fail to return any matching result with a certain probability. By adjusting a few parameters, SASF lets users trade off matching speed against matching accuracy in their applications.
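The spectral angle metric that SASF indexes against is the angle between two spectra treated as vectors, which makes the comparison insensitive to uniform illumination scaling. Below is a minimal sketch of this metric and of the brute-force nearest-neighbour search that an index such as SASF is meant to accelerate; the SASF tree construction itself is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; invariant to positive scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def brute_force_match(query, library):
    """Exhaustive nearest-neighbour search under the spectral angle metric.
    This linear scan is the baseline that an index such as SASF replaces."""
    angles = [spectral_angle(query, spectrum) for spectrum in library]
    best = int(np.argmin(angles))
    return best, angles[best]

# Example: a spectrum matches a scaled copy of itself (angle 0) rather than a different one.
# library = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
# brute_force_match(np.array([2.0, 4.0, 6.0]), library)   # -> (0, 0.0)
```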
ISBN (Print): 9781467347280
Cloud computing features a flexible computing infrastructure for large-scale data processing. MapReduce is a typical model that provides a logical framework for cloud computing, and Hadoop, an open-source implementation of MapReduce, is a common platform for realizing this kind of parallel computing model. Normally, a cloud computing service comprises many heterogeneous commodity machines. The original resource arrangement policy in Hadoop focuses only on logical resources, such as the number of free slots, without considering the physical workload of the underlying computing resources, such as the CPU utilization, network bandwidth, and memory usage on each working node. This paper aims at dispatching the computation load to all processing nodes in the cloud computing environment by considering the physical workload on each node, so as to prevent bias in arranging computation resources and hence improve the overall computing performance in a heterogeneous cloud environment.
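As an illustration of workload-aware dispatching rather than the paper's concrete policy, the sketch below combines the CPU, memory, and network utilization reported by each node into a single composite load score and routes the next task to the least-loaded node. The weights, the node-state format, and the node names are assumptions made for the example.

```python
def workload_score(cpu, mem, net, weights=(0.5, 0.3, 0.2)):
    """Composite physical workload of a node; each metric is a utilization in [0, 1]."""
    w_cpu, w_mem, w_net = weights
    return w_cpu * cpu + w_mem * mem + w_net * net

def pick_node(nodes):
    """Dispatch the next task to the node with the lowest composite workload."""
    return min(nodes, key=lambda n: workload_score(n["cpu"], n["mem"], n["net"]))

# Hypothetical node states as reported by periodic heartbeats.
nodes = [
    {"name": "worker-1", "cpu": 0.80, "mem": 0.60, "net": 0.20},
    {"name": "worker-2", "cpu": 0.30, "mem": 0.40, "net": 0.50},
]
print(pick_node(nodes)["name"])   # worker-2: lower overall physical load
```

Weighting CPU more heavily than memory or bandwidth is only one possible choice; the point of the sketch is that dispatching uses measured physical load rather than a static slot count.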