We define a nonlinear generalization of the singular value decomposition (SVD), which can be interpreted as a restricted SVD with Riemannian metrics in the column and row space. This so-called Riemannian SVD occurs in structured total least squares problems, for instance in the least squares approximation of a given matrix A by a rank deficient Hankel matrix B, which is an important problem in system identification and signal processing. Several algorithms to find the 'minimizing' singular triplet are suggested, both for the SVD and its nonlinear generalization. This paper reveals interesting connections between linear algebra (structured matrix problems), numerical analysis (algorithms), optimization theory, (differential) geometry and system theory (differential equations, stability, Lyapunov functions). We give some numerical examples and also point out some open problems.
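The structured approximation subproblem described above, approximating a given Hankel matrix by a rank-deficient Hankel matrix, can be pictured with a small sketch. The code below uses Cadzow's alternating-projection heuristic rather than the Riemannian SVD algorithms proposed in the paper, and the signal, matrix shape, and target rank are illustrative assumptions.

```python
# A minimal sketch of Cadzow's alternating-projection heuristic for the
# subproblem described above: approximate a Hankel matrix built from a signal
# by a rank-deficient Hankel matrix. This is not the paper's Riemannian SVD
# algorithm; the signal, matrix shape, and target rank are assumptions.
import numpy as np

def hankel_from_signal(x, rows):
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def signal_from_hankel(H):
    rows, cols = H.shape
    out = np.zeros(rows + cols - 1)
    cnt = np.zeros(rows + cols - 1)
    for i in range(rows):
        for j in range(cols):
            out[i + j] += H[i, j]
            cnt[i + j] += 1
    return out / cnt                      # average along anti-diagonals

def cadzow(x, rows, rank, iters=50):
    s = np.asarray(x, dtype=float)
    for _ in range(iters):
        H = hankel_from_signal(s, rows)
        U, sv, Vt = np.linalg.svd(H, full_matrices=False)
        H_low = (U[:, :rank] * sv[:rank]) @ Vt[:rank]   # project onto rank r
        s = signal_from_hankel(H_low)                   # project back onto Hankel structure
    return hankel_from_signal(s, rows)

# Example: noisy single exponential, target rank 1.
rng = np.random.default_rng(0)
x = 0.9 ** np.arange(12) + 0.05 * rng.standard_normal(12)
B = cadzow(x, rows=5, rank=1)
print(np.linalg.svd(B, compute_uv=False))   # trailing singular values shrink toward 0
```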
ISBN (print): 9781581139600
We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement free bidders. The running time of these algorithms is polynomial in the number of items m and in the number of bidders n, even though the "input size" is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. The third algorithm provides an improved 2-approximation for the more restricted case of "XOS bidders", a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems. Copyright 2005 ACM.
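As a rough illustration of the value-oracle setting, the sketch below implements the classic "grand bundle versus per-item sale" heuristic that underlies O(√m)-type welfare bounds; it is not the incentive-compatible mechanism from the paper, and the valuations, item names, and function signatures are assumptions for the example.

```python
# A minimal sketch (not the paper's mechanism): the classic "grand bundle vs.
# per-item sale" heuristic behind O(sqrt(m))-style welfare bounds, using only
# value oracles. Each valuation is a callable mapping a frozenset of items to
# a number; the toy bidders and items below are assumptions.

def allocate(valuations, items):
    n, items = len(valuations), list(items)
    grand = frozenset(items)

    # Option 1: give every item to the bidder who values the grand bundle most.
    top = max(range(n), key=lambda i: valuations[i](grand))
    alloc_bundle = {i: (grand if i == top else frozenset()) for i in range(n)}

    # Option 2: give each item to its highest single-item bidder.
    won = {i: set() for i in range(n)}
    for item in items:
        winner = max(range(n), key=lambda i: valuations[i](frozenset([item])))
        won[winner].add(item)
    alloc_items = {i: frozenset(s) for i, s in won.items()}

    # Keep whichever option the value oracles say has higher total welfare.
    welfare = lambda alloc: sum(valuations[i](alloc[i]) for i in range(n))
    return max((alloc_bundle, alloc_items), key=welfare)

# Toy example: two additive (hence complement-free) bidders over three items.
v0 = lambda S: sum({"a": 3.0, "b": 1.0, "c": 1.0}[x] for x in S)
v1 = lambda S: sum({"a": 1.0, "b": 2.0, "c": 2.0}[x] for x in S)
print(allocate([v0, v1], ["a", "b", "c"]))
```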
ISBN (print): 0819421464
Hierarchical neural network approaches have been developed, first for combining high- and low-frequency (HF and LF) side scan sonar imagery, and then for the combination of acoustic images and magnetic data. The adopted acoustic data fusion approach consists of an image-screening / HF-LF blob-matching stage, followed by an information fusion/classification stage. Three variants of the information fusion/classification algorithm were conceived and evaluated, based on 'aggregate-feature combining', 'neural-network-discriminant combining', and individual-classifier 'decision-based combining', respectively. The 'discriminant combining' variant yielded the best classification performance and, compared with the individual HF and LF classifiers, reduced the density of false alarms by at least an order of magnitude. Results are then obtained for combining both acoustic and magnetic data, using the described HF/LF side scan sonar discriminant-combining fusion algorithm as a starting point. Acoustic image-pair 'tokens' are associated with magnetic 'tokens', yielding three classes of tokens: associated acoustic-pair and magnetic tokens, isolated acoustic-pair tokens, and isolated magnetic tokens. Neural network output discriminants are derived for each of these three token types and are used to make classification decisions. The resulting detection/classification algorithm is evaluated against a combined ground truth obtained from both acoustic and magnetic sources.
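The three combining variants can be pictured with a small, purely illustrative sketch; the paper's actual network architectures, features, and classes are not given here, so all names and numbers below are assumptions.

```python
# Purely illustrative sketch of the three combining schemes described above;
# the feature vectors, discriminant scores, and class labels are made-up
# assumptions, not data or models from the paper.
import numpy as np

def feature_combining(hf_features, lf_features):
    """'Aggregate-feature combining': concatenate HF and LF feature vectors
    and hand the result to a single classifier (not shown)."""
    return np.concatenate([hf_features, lf_features])

def discriminant_combining(hf_scores, lf_scores, w_hf=0.5, w_lf=0.5):
    """'Discriminant combining': fuse per-class output discriminants of the
    HF and LF classifiers, then decide."""
    fused = w_hf * np.asarray(hf_scores) + w_lf * np.asarray(lf_scores)
    return int(np.argmax(fused))

def decision_combining(decisions):
    """'Decision-based combining': majority vote over individual classifier
    decisions."""
    values, counts = np.unique(np.asarray(decisions), return_counts=True)
    return int(values[np.argmax(counts)])

# Toy two-class example (class 0 = clutter, class 1 = mine-like object).
hf_scores, lf_scores = [0.4, 0.6], [0.7, 0.3]
print(discriminant_combining(hf_scores, lf_scores))   # fused decision
print(decision_combining([1, 0, 1]))                  # majority vote
```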
ISBN (print): 0898713293
The authors examine the problem of incrementally evaluating algebraic functions. The paper presents both lower bounds and algorithm design techniques for algebraic problems. The first part gives lower bounds for simply stated algebraic problems: multipoint polynomial evaluation, polynomial reciprocal, and extended polynomial GCD. The second part presents two general-purpose techniques for designing incremental algorithms: the first can produce highly efficient incremental algorithms, while the second yields slightly slower incremental algorithms for these problems but applies to a wider class of problems.
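As a small illustration of the incremental setting (not of the paper's lower bounds or techniques), the sketch below maintains the values of a polynomial at a fixed set of points and patches them when a single coefficient changes, instead of re-evaluating from scratch; the polynomial, points, and update are assumptions.

```python
# Illustrative sketch of incremental multipoint polynomial evaluation (an
# assumption for exposition, not the paper's algorithms): keep p(x_i) for a
# fixed point set and patch the values when one coefficient of p changes.
import numpy as np

class IncrementalMultipoint:
    def __init__(self, coeffs, points):
        # coeffs[k] is the coefficient of x**k; precompute the powers x_i**k.
        self.coeffs = np.asarray(coeffs, dtype=float)
        self.powers = np.vander(np.asarray(points, dtype=float),
                                N=len(coeffs), increasing=True)  # (n_points, deg+1)
        self.values = self.powers @ self.coeffs                  # p(x_i) for all i

    def update_coefficient(self, k, new_value):
        # O(n_points) incremental update instead of a full re-evaluation.
        delta = new_value - self.coeffs[k]
        self.coeffs[k] = new_value
        self.values += delta * self.powers[:, k]
        return self.values

# p(x) = 1 + 2x + 3x^2 evaluated at four points, then the x^2 term changes.
inc = IncrementalMultipoint([1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0])
print(inc.values)                       # [ 1.  6. 17. 34.]
print(inc.update_coefficient(2, 5.0))   # patched values after the change
```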
In this paper, we present a performance study of three different load balancing algorithms. The first algorithm employs only task assignment, whereas the other two allow both task assignment and migration. We conclude that although task migration usually costs more than task assignment, in some situations it can augment task assignment to provide an extra performance improvement, because task migration offers an alternative mechanism for distributing workload in a distributed system. The improvement from this approach is especially significant when a heavily loaded node has no appropriate tasks for assignment.
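A toy simulation can make the assignment-versus-migration distinction concrete, namely that migration helps when a heavily loaded node holds tasks that were not appropriate for assignment at arrival time; the workload, costs, and policies below are assumptions, not the study's model.

```python
# Toy sketch (all modelling choices are illustrative assumptions, not the
# paper's workload model): node 0 keeps receiving tasks that are not
# appropriate for remote assignment, so it overloads unless queued tasks can
# later be migrated to less loaded nodes.
import random

def simulate(migrate, num_nodes=4, steps=300, seed=1):
    random.seed(seed)
    load = [0] * num_nodes
    imbalance = 0.0
    for _ in range(steps):
        for i in range(num_nodes):
            arrivals = random.randint(0, 2) + (2 if i == 0 else 0)
            for _ in range(arrivals):
                # Node 0's tasks must start locally (not appropriate for
                # assignment); other tasks go to the least-loaded node.
                target = 0 if i == 0 else min(range(num_nodes), key=lambda j: load[j])
                load[target] += 1
        if migrate:
            hi = max(range(num_nodes), key=lambda j: load[j])
            lo = min(range(num_nodes), key=lambda j: load[j])
            if load[hi] - load[lo] > 2:   # migrate only under clear imbalance,
                load[hi] -= 1             # since migration costs more than
                load[lo] += 1             # assignment
        load = [max(0, l - 2) for l in load]   # each node serves two tasks per step
        imbalance += max(load) - min(load)
    return imbalance / steps

print("assignment only       :", simulate(migrate=False))
print("assignment + migration:", simulate(migrate=True))
```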
ISBN (print): 078031901X
Connectionist constructive learning dynamically constructs a network to balance the complexity of the network topology against the complexity of the function specified by the training data. To evaluate the quality of a constructive learning algorithm, not only must the learning efficiency of the algorithm be measured, but the topological complexity of the constructed network must also be examined. This paper discusses both the learning speeds and the network sizes of constructive learning algorithms. Because backpropagation requires more nodes than necessary for the network to converge, it is used as a reference for measuring the complexity of constructive networks. Experiments with two constructive algorithms, cascade correlation and stack, indicate that networks built by constructive learning algorithms can be less complex than the network required by the backpropagation algorithm.
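The constructive idea of growing a network only as large as the training data requires can be illustrated with a toy sketch; it uses random tanh hidden features with a least-squares output layer rather than the cascade correlation or stack algorithms discussed in the paper, and all settings are assumptions.

```python
# A toy sketch of constructive learning (an illustration, not cascade
# correlation or stack): hidden units with random tanh features are added one
# at a time, the output weights are refit by least squares after each
# addition, and construction stops as soon as the training error is small, so
# the final network is only as large as the task needs.
import numpy as np

def constructive_fit(X, y, max_units=50, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W, b = [], []                      # hidden-unit parameters added so far
    for n_units in range(1, max_units + 1):
        W.append(rng.normal(size=X.shape[1]))
        b.append(rng.normal())
        H = np.tanh(X @ np.array(W).T + np.array(b))     # hidden activations
        out_w, *_ = np.linalg.lstsq(H, y, rcond=None)    # refit output layer
        mse = np.mean((H @ out_w - y) ** 2)
        if mse < tol:
            break
    return n_units, mse

# XOR target: construction stops once enough hidden units are present.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
print(constructive_fit(X, y))
```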
ISBN (print): 0897916115
In the framework of the PAC-learning model, relationships between learning processes and information compressing processes are investigated. Information compressing processes are formulated as weak Occam algorithms. A weak Occam algorithm is a deterministic polynomial-time algorithm that, given m examples of an unknown function, outputs with high probability a representation of a function that is consistent with the examples and belongs to a function class of complexity o(m). It has been shown that a weak Occam algorithm is also a consistent PAC-learning algorithm. In this extended abstract, it is shown that the converse does not hold, by exhibiting a PAC-learning algorithm that is not a weak Occam algorithm; in addition, some natural properties of learning algorithms, called conservativeness and monotonicity, that might help the converse hold are identified. In particular, conditions under which a conservative PAC-learning algorithm is a weak Occam algorithm are given, and it is shown that, under some natural conditions, a monotone PAC-learning algorithm for a hypothesis class can be transformed into a weak Occam algorithm without changing the hypothesis class.
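The weak Occam condition and the standard argument behind "weak Occam implies consistent PAC learning" can be written out; the notation below, which reads "complexity" as the logarithm of the hypothesis-class size, is an assumption for exposition rather than necessarily the paper's exact definition.

```latex
% A hedged formalization of the weak Occam condition from the abstract
% (reading "complexity" as the logarithm of the hypothesis-class size is an
% assumption for exposition). Given m labelled examples of an unknown target
% f, the algorithm outputs, with high probability, a hypothesis h such that
\[
  h(x_i) = f(x_i) \quad \text{for } i = 1,\dots,m,
  \qquad
  h \in H_m \ \text{with}\ \log_2 |H_m| = o(m).
\]
% The usual Occam-razor argument then bounds the generalization error of any
% consistent hypothesis from H_m:
\[
  \Pr\bigl[\,\exists\, h \in H_m:\ h \text{ consistent and } \mathrm{err}(h) > \varepsilon \,\bigr]
  \;\le\; |H_m| (1 - \varepsilon)^m
  \;\le\; \exp\bigl(\ln |H_m| - \varepsilon m\bigr),
\]
% which goes to zero once m is large enough that \varepsilon m dominates
% \ln |H_m| = o(m). This is the direction "weak Occam implies consistent PAC
% learning" cited in the abstract; the converse is what the paper shows can fail.
```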
Land surface temperature (LST) retrieval from NOAA-AVHRR data is mainly carried out with so-called split-window algorithms. During the last 20 years, 17 split-window algorithms have been published. These algorithms can be grouped into four categories: emissivity-dependent models, two-factor models, complicated models, and radiance models. In this paper we compare these split-window algorithms in terms of their computation and accuracy. Two methods are used for the comparison: ground datasets and simulation datasets. The results show that different algorithms perform differently under different situations. For the simulation datasets, the algorithms of Qin et al. and Sobrino et al. are the best, with an average root mean square (RMS) error of less than 0.3°C. The algorithms of França and Cracknell, Prata, and Ulivieri et al. also have very low RMS errors (0.5-0.7°C). The comparison with ground datasets indicates that the algorithms of Qin et al. and Sobrino et al. are also among the best for the dataset without precise in situ atmospheric water vapor contents; these algorithms retrieve LST with an average RMS error of less than 1.9°C for the 361 measurements of the two Australian sites. In obvious contrast to the generally higher RMS error for that dataset, the algorithms achieve much lower RMS errors for the intensive experiments with precise in situ atmospheric water vapor contents. Based on these two comparison methods, it can be concluded that, overall, the algorithm of Qin et al. is the best alternative for LST retrieval from AVHRR, followed by Sobrino et al., França and Cracknell, and Prata, when data are available to estimate both emissivity and transmittance.
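The compared algorithms share a common split-window shape, which the hedged sketch below writes out: LST is estimated from the brightness temperatures of the two AVHRR thermal channels, with coefficients that depend on surface emissivity and atmospheric water vapor. The coefficient values used here are placeholders, not the published coefficients of Qin et al., Sobrino et al., or any other algorithm in the comparison.

```python
# A minimal sketch of the generic split-window form shared by the compared
# algorithms: LST equals the channel-4 brightness temperature plus a
# correction built from the channel-4/channel-5 difference. The coefficients
# below are placeholders for illustration only.

def split_window_lst(t4_kelvin, t5_kelvin, a=1.0, b=2.5, c=0.0):
    """Generic split-window estimate: LST = T4 + a*(T4 - T5) + b*(T4 - T5)**2 + c.

    t4_kelvin, t5_kelvin: AVHRR channel 4 and 5 brightness temperatures (K).
    a, b, c: placeholder coefficients that a real algorithm would derive from
    surface emissivity and column water vapor.
    """
    dt = t4_kelvin - t5_kelvin
    return t4_kelvin + a * dt + b * dt ** 2 + c

# Example: a 1.2 K channel difference over a warm surface.
print(split_window_lst(301.5, 300.3))
```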
The Hard and Fuzzy C-Means algorithms are commonly used in many applications. However, they are highly sensitive to noise and outliers. In this paper, we reformulate the Hard and Fuzzy C-Means algorithms and combine them with a robust estimator, the Least Trimmed Squares estimator, to produce robust versions of these algorithms. To find the optimum trimming ratio for the data set and to eliminate the noise from the data set, we develop an unsupervised algorithm based on a cluster validity measure. We illustrate the robustness of these algorithms with examples.
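A minimal sketch can show how a least-trimmed-squares flavour of trimming fits into a hard C-means loop; it is not the paper's reformulation or its unsupervised trimming-ratio selection, and the data, initialization, and trimming ratio are assumptions.

```python
# Illustrative sketch (not the paper's exact reformulation): a hard C-means
# loop made robust in the least-trimmed-squares spirit, updating the cluster
# centers from only the (1 - trim_ratio) fraction of points closest to their
# nearest center, so gross outliers are excluded from the fit.
import numpy as np

def trimmed_hard_cmeans(X, init_centers, trim_ratio=0.1, iters=50):
    centers = np.array(init_centers, dtype=float)
    keep = int(round((1.0 - trim_ratio) * len(X)))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # hard memberships
        nearest = d.min(axis=1)
        kept = np.argsort(nearest)[:keep]         # trim the farthest points
        for k in range(len(centers)):
            members = kept[labels[kept] == k]
            if len(members):
                centers[k] = X[members].mean(axis=0)
    trimmed = np.argsort(nearest)[keep:]
    return centers, labels, trimmed

# Two Gaussian clusters plus a few gross outliers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2)),
               rng.uniform(20.0, 30.0, (5, 2))])   # outliers
centers, labels, trimmed = trimmed_hard_cmeans(X, [[1.0, 1.0], [4.0, 4.0]],
                                               trim_ratio=0.05)
print(np.round(centers, 2), trimmed)
```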
ISBN (print): 0780309170
Six distributed network restoration algorithms are analyzed using a set of important performance metrics and functional characteristics. The functional characteristics are used to explain how these algorithms function and provide insight into their performance. The analysis and simulation results indicate that the Two Prong network restoration algorithm, which is based on issuing aggregate restoration requests from both ends of the disruption and on an intelligent backtracking mechanism, outperforms the other algorithms.
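The "both ends" flavour of the Two Prong approach can be pictured with a small centralized sketch: searches grow from the two endpoints of the failed span over links with spare capacity until they meet, yielding a restoration path. The real algorithm is a distributed, message-based protocol with backtracking, and the topology and spare capacities below are assumptions for illustration.

```python
# Centralized toy sketch of the "both ends" idea: breadth-first searches grow
# from each endpoint of the failed span over links with spare capacity until
# they meet, giving a restoration path. The real Two Prong algorithm is a
# distributed, message-based protocol with backtracking; the topology here is
# an assumption for illustration.
from collections import deque

def restore(spare, src, dst):
    """spare: {node: {neighbor, ...}} adjacency over links with spare capacity."""
    parents = {src: {src: None}, dst: {dst: None}}   # one search tree per end
    frontiers = {src: deque([src]), dst: deque([dst])}

    def path_through(meet, a, b):
        left, n = [], meet
        while n is not None:
            left.append(n)
            n = parents[a][n]
        right, n = [], parents[b][meet]
        while n is not None:
            right.append(n)
            n = parents[b][n]
        return list(reversed(left)) + right if a == src else list(reversed(right)) + left

    while frontiers[src] and frontiers[dst]:
        for end, other in ((src, dst), (dst, src)):  # grow both prongs in turn
            node = frontiers[end].popleft()
            for nbr in spare.get(node, ()):
                if nbr in parents[other]:            # the prongs meet
                    parents[end].setdefault(nbr, node)
                    return path_through(nbr, end, other)
                if nbr not in parents[end]:
                    parents[end][nbr] = node
                    frontiers[end].append(nbr)
    return None                                      # no spare-capacity path

# Spare-capacity graph after the span A-B fails.
spare = {"A": {"C"}, "C": {"A", "D"}, "D": {"C", "E"}, "E": {"D", "B"}, "B": {"E"}}
print(restore(spare, "A", "B"))   # -> ['A', 'C', 'D', 'E', 'B']
```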