This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tensor product (T-product). The random projection method is utilized to capture the important actions of the underlying data tensors, which are then used to obtain small sketches of the original data tensors that are easier to handle. Due to the small size of the tensor sketches, deterministic approaches can be applied to compute their GTSVD. The GTSVD of the original large-scale data tensors is then recovered from the GTSVD of the small tensor sketches. Experiments are conducted to show the effectiveness of the proposed approach.
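The sketch-then-recover pattern described above has a direct matrix analogue (for tensors under the T-product, the same steps run slice-wise in the Fourier domain). The following NumPy sketch illustrates that pattern for the ordinary SVD only; it is not the authors' GTSVD algorithm, and all names in it are invented for the example:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Sketch-based SVD: capture the important actions of A with a random
    test matrix, factor the small sketch, then recover the large factors."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for the range
    B = Q.T @ A                              # small sketch, easy to handle
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                               # lift back to the original size
    return U[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-8 test matrix: the sketched SVD should recover it closely.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 150))
U, s, Vt = randomized_svd(A, rank=8)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)
```

Because the sketch size (rank + oversample) exceeds the true rank, the random projection captures the range of `A` almost surely, and the recovered factorization is accurate to roundoff.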
In this paper, we focus on the fixed-precision problem for the approximate Tucker decomposition of an arbitrary tensor. First, we modify several structured matrices for the adaptive randomized range finder algorithm in [W. Yu, Y. Gu, and Y. Li, SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1339--1359], replacing the standard Gaussian matrices with uniform random matrices or with the Khatri--Rao product of standard Gaussian (or uniform random) matrices. Second, by applying this modified algorithm to each mode unfolding of the input/intermediate tensor, we obtain adaptive randomized variants of T-HOSVD and ST-HOSVD. Third, we establish theoretical properties of these adaptive randomized variants. Finally, numerical examples illustrate that, for a given tolerance, the proposed algorithms are superior to other algorithms in terms of relative error, desired Tucker rank, and running time.
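As a rough illustration of the structured test matrices discussed above, the snippet below builds a Khatri--Rao product of standard Gaussian matrices and uses it as a (fixed-size, non-adaptive) range finder for one mode unfolding; the tolerance-driven adaptive loop of the paper is omitted, and all shapes are made up for the example:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column j of the result is kron(A[:, j], B[:, j])."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(0)
# A 30 x 20 x 25 tensor with multilinear rank at most (5, 5, 5).
X = np.einsum('ia,ja,ka->ijk',
              rng.standard_normal((30, 5)),
              rng.standard_normal((20, 5)),
              rng.standard_normal((25, 5)))
X1 = X.reshape(30, -1)                      # mode-1 unfolding (columns: (j,k), k fastest)
r = 10                                       # sketch size above the true rank
# Structured test matrix: Khatri-Rao product of two Gaussian matrices,
# with row ordering matching the unfolding's column ordering.
Omega = khatri_rao(rng.standard_normal((20, r)),
                   rng.standard_normal((25, r)))
Q, _ = np.linalg.qr(X1 @ Omega)              # range finder step
err = np.linalg.norm(X1 - Q @ (Q.T @ X1)) / np.linalg.norm(X1)
print(err)
```

The Khatri--Rao structure means the test matrix never needs to be stored as a dense unstructured Gaussian of the unfolding's (potentially huge) column dimension.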
Both randomized algorithms and deep learning techniques have been successfully used for regression and classification problems. However, the random hidden weights of randomized algorithms require suitable distributions to be chosen in advance, and deep learning methods do not use the output information in system identification. In this paper, the distributions of the hidden weights are obtained by restricted Boltzmann machines. This deep learning method uses input data to construct the statistical features of the hidden weights. The output weights of the neural model are trained by normal randomized algorithms. We thus combine unsupervised training (deep learning) with supervised learning (randomized algorithms) and take advantage of both. The proposed randomized algorithms with deep learning modification are validated on three benchmark problems. (C) 2015 Elsevier Inc. All rights reserved.
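The supervised half of such a scheme — random hidden weights plus output weights trained by least squares — can be sketched as follows. Here the hidden weights are drawn from a plain Gaussian as a stand-in for the RBM-learned distributions the paper uses; all function names and sizes are invented for the example:

```python
import numpy as np

def random_net_fit(X, y, n_hidden=200, seed=0):
    """Randomized-algorithm training: hidden weights are random (here
    Gaussian, as a stand-in for an RBM-learned distribution); only the
    output weights are fitted, by linear least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # supervised output weights
    return W, b, beta

def random_net_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Smoke test on a smooth one-dimensional target.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = random_net_fit(X, y)
mse = np.mean((random_net_predict(X, W, b, beta) - y) ** 2)
print(mse)
```

Only the least-squares step uses the outputs, which is exactly the division of labor the abstract describes: unsupervised statistics for the hidden layer, supervised fitting for the output layer.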
In this paper, we consider estimation algorithms for the tensor-response-on-vector-covariate regression model. Based on the projection theory of tensors and the idea of randomized algorithms for tensor decomposition, three new algorithms, named SHOLRR, RHOLRR, and RSHOLRR, are proposed under the low-rank Tucker decomposition, and theoretical analyses for the two randomized algorithms are also provided. To explore the nonlinear relationship between the tensor response and the vector covariate, we develop the KRSHOLRR algorithm based on the kernel trick and the RSHOLRR algorithm. Our proposed algorithms not only guarantee high estimation accuracy but also have the advantage of fast computing speed, especially for higher-order tensor responses. Through extensive analyses of synthesized data and applications to two real datasets, we demonstrate that our proposed algorithms outperform the state of the art.
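A minimal sketch of the underlying low-rank Tucker idea (not the HOLRR-family algorithms themselves): fit the coefficient tensor entrywise by ordinary least squares, then project it onto a low multilinear rank with a truncated-HOSVD projection. All sizes and helper names here are invented for the example:

```python
import numpy as np

def tucker_project(T, ranks):
    """Project T onto a low multilinear (Tucker) rank, mode by mode,
    via truncated SVDs of the unfoldings (an ST-HOSVD-style projection)."""
    for mode, r in enumerate(ranks):
        Tm = np.moveaxis(T, mode, 0)
        M = Tm.reshape(Tm.shape[0], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        T = np.moveaxis((U @ (U.T @ M)).reshape(Tm.shape), 0, mode)
    return T

# Tensor-response model: Y_i = B contracted with x_i, plus noise,
# with a coefficient tensor B of low Tucker rank.
rng = np.random.default_rng(0)
p, d1, d2, n = 6, 10, 12, 400
core = rng.standard_normal((2, 2, 2))
B = np.einsum('abc,ia,jb,kc->ijk', core,
              rng.standard_normal((d1, 2)),
              rng.standard_normal((d2, 2)),
              rng.standard_normal((p, 2)))
X = rng.standard_normal((n, p))
Y = np.einsum('ijk,nk->nij', B, X) + 0.1 * rng.standard_normal((n, d1, d2))
# Step 1: entrywise OLS.  Step 2: low-Tucker-rank projection.
Bols = np.linalg.lstsq(X, Y.reshape(n, -1), rcond=None)[0].T.reshape(d1, d2, p)
Bhat = tucker_project(Bols, (2, 2, 2))
rel = np.linalg.norm(Bhat - B) / np.linalg.norm(B)
print(rel)
```

The paper's randomized variants replace the exact SVDs in the projection step with sketched ones, which is where the speedup for higher-order responses comes from.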
The inference of a lexicographic rule from paired comparisons, ranking, or choice data is a discrete optimization problem that generalizes the linear ordering problem. We develop an approach to its solution using randomized algorithms. First, we show that maximizing the expected value of a randomized solution is equivalent to solving the lexicographic inference problem. As a result, the discrete problem is transformed into a continuous and unconstrained nonlinear program that can be solved, possibly only to a local optimum, using nonlinear optimization methods. Second, we show that a maximum likelihood procedure, which runs in polynomial time, can be used to implement the randomized algorithm. The maximum likelihood value determines a lower bound on the performance ratio of the randomized algorithm. We employ the proposed approach to infer lexicographic rules for individuals using data from a choice experiment for electronic tablets. These rules obtain substantially better fit and predictions than a previously described greedy algorithm, a local search algorithm, and a multinomial logit model.
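A toy version of the randomized idea above: sample attribute orders at random and keep the order that explains the most observed comparisons. This is a deliberately simplified stand-in for the paper's expected-value and maximum-likelihood machinery, run on synthetic binary-attribute data:

```python
import random

def lex_better(a, b, order):
    """Lexicographic rule: scan attributes in priority order; the item
    with the higher value on the first discriminating attribute wins."""
    for k in order:
        if a[k] != b[k]:
            return a[k] > b[k]
    return False

def fit_lex_randomized(pairs, n_attrs, n_samples=2000, seed=0):
    """Randomized search over attribute orders: sample orders uniformly
    and keep the one explaining the most observed paired comparisons."""
    rng = random.Random(seed)
    best_order, best_fit = None, -1
    for _ in range(n_samples):
        order = list(range(n_attrs))
        rng.shuffle(order)
        fit = sum(lex_better(a, b, order) for a, b in pairs)
        if fit > best_fit:
            best_order, best_fit = order, fit
    return best_order, best_fit

# Comparisons generated by the true priority order (2, 0, 1) over all
# binary-attribute items; the fitted rule should explain all of them.
true_order = [2, 0, 1]
items = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
pairs = [(a, b) for a in items for b in items if lex_better(a, b, true_order)]
order, fit = fit_lex_randomized(pairs, 3)
print(fit, len(pairs))
```

With three attributes there are only six orders, so uniform sampling trivially finds the best one; the point of the paper is how to do this efficiently when the space of orders is exponentially large.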
The tensor-train (TT) format is a highly compact low-rank representation for high-dimensional tensors. TT is particularly useful when representing approximations to the solutions of certain types of parametrized partial differential equations. For many of these problems, computing the solution explicitly would require an infeasible amount of memory and computational time. While the TT format makes these problems tractable, iterative techniques for solving the PDEs must be adapted to perform arithmetic while maintaining the implicit structure. The fundamental operation used to maintain feasible memory and computational time is called rounding, which truncates the internal ranks of a tensor already in TT format. We propose several randomized algorithms for this task that generalize randomized low-rank matrix approximation algorithms and provide significant reductions in computation compared to deterministic TT-rounding algorithms. Randomization is particularly effective in the case of rounding a sum of TT-tensors (where we observe a 20x speedup), which is the bottleneck computation in the adaptation of GMRES to vectors in TT format. We present the randomized algorithms and compare their empirical accuracy and computational time with deterministic alternatives.
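The matrix analogue of randomized rounding is easy to state: given a matrix held in factored form with an inflated inner rank (e.g., a sum of low-rank terms), reduce the inner rank by sketching the range with a random test matrix, never forming the full matrix. A hedged NumPy sketch of that analogue (not the TT algorithm itself; names are invented):

```python
import numpy as np

def rounded_factors(U, V, target_rank, oversample=5, seed=0):
    """Truncate the inner rank of A = U @ V.T without ever forming A:
    sketch the range with a random test matrix, then project."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((V.shape[0], target_rank + oversample))
    Y = U @ (V.T @ Omega)                # A @ Omega, applied factor by factor
    Q, _ = np.linalg.qr(Y)               # orthonormal range basis
    W = (U.T @ Q).T @ V.T                # Q.T @ A, again factor by factor
    return Q, W                          # A ~= Q @ W, with reduced inner rank

# A sum of two rank-4 terms stored with inflated inner rank 8, although
# the represented matrix (here 2 * U1 @ V1.T) actually has rank 4.
rng = np.random.default_rng(1)
U1, V1 = rng.standard_normal((100, 4)), rng.standard_normal((80, 4))
U = np.hstack([U1, U1])
V = np.hstack([V1, V1])
Q, W = rounded_factors(U, V, target_rank=4)
err = np.linalg.norm(Q @ W - U @ V.T) / np.linalg.norm(U @ V.T)
print(err)
```

Rounding a sum of TT-tensors generalizes exactly this situation: the summed representation has ranks that add, while the underlying tensor's ranks are much smaller.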
It has been suggested recently that the uncertainty randomization approach may offer numerical advantages when applied to robust control problems. This paper investigates new possibilities which this approach may offer in relation to the robust stability and control of stochastic systems governed by uncertain discrete-state Markov processes.
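The basic uncertainty-randomization computation is a Monte Carlo estimate of the probability that a sampled uncertain system is stable. The discrete-time system below is hypothetical, chosen only so the example is self-contained:

```python
import numpy as np

def stability_estimate(sample_system, n_samples=5000, seed=0):
    """Monte Carlo estimate of the probability that a randomly drawn
    uncertain system is Schur stable (spectral radius < 1)."""
    rng = np.random.default_rng(seed)
    stable = sum(
        np.max(np.abs(np.linalg.eigvals(sample_system(rng)))) < 1
        for _ in range(n_samples)
    )
    return stable / n_samples

# Hypothetical uncertain system: nominal matrix plus a bounded random
# perturbation (small enough here that stability always holds).
A0 = np.array([[0.5, 0.2],
               [0.0, 0.4]])

def sample_system(rng):
    return A0 + 0.1 * rng.uniform(-1, 1, (2, 2))

p_hat = stability_estimate(sample_system)
print(p_hat)
```

The appeal of the approach is that the sample size needed for a given confidence is independent of the dimension of the uncertainty, which is what makes it attractive for otherwise intractable robust control problems.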
The problem of minimum distance localization in environments that may contain self-similarities is addressed. A mobile robot is placed at an unknown location inside a 2D self-similar polygonal environment P. The robot has a map of P and can compute visibility data through sensing. However, the self-similarities in the environment mean that the same visibility data may correspond to several different locations. The goal, therefore, is to determine the robot's true initial location while minimizing the distance traveled by the robot. Two randomized approximation algorithms that solve minimum distance localization are presented. The performance of the proposed algorithms is evaluated empirically.
We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter n_i that gets incremented over time, and the goal is to track an epsilon-approximation of their sum n = sum_i n_i continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is Theta((k/epsilon) log N), where N is the final value of n when the tracking finishes, we show that with randomization, the communication cost can be reduced to O((sqrt(k)/epsilon) log N). Our algorithm is simple and uses only O(1) space at each player, while the lower bound holds even assuming each player has infinite computing power. Then, we extend our techniques to two related distributed tracking problems: frequency-tracking and rank-tracking, and obtain similar improvements over previous deterministic algorithms. Both problems are of central importance in large data monitoring and analysis, and have been extensively studied in the literature.
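The flavor of the randomized saving can be shown with a deliberately simplified simulation: each increment is forwarded to the coordinator only with probability p, and the coordinator uses the unbiased estimate (#messages)/p. The paper's algorithm is considerably more refined; this only illustrates the communication-versus-accuracy trade-off:

```python
import numpy as np

def tracked_sum(increments, p, seed=0):
    """Simplified randomized tracking: each of the given increments is
    forwarded with probability p; (#messages)/p is an unbiased estimate
    of the true count, at a fraction p of the communication."""
    rng = np.random.default_rng(seed)
    sent = int((rng.random(increments) < p).sum())
    return sent / p, sent

N = 1_000_000                       # true final count
est, sent = tracked_sum(N, p=0.01)  # ~1% of increments communicated
print(abs(est - N) / N, sent)
```

By a Chernoff bound, the relative error concentrates around 1/sqrt(pN), so far fewer than N messages suffice for an epsilon-approximation with high probability.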
Distributed optimization in multi-agent systems under sparsity constraints has recently received a lot of attention. In this paper, we consider the in-network minimization of a continuously differentiable nonlinear function which is a combination of local agent objective functions, subject to sparsity constraints on the variables. A crucial issue in in-network optimization is the handling of communications, which may be expensive. This calls for efficient algorithms that are able to reduce the number of required communication links and transmitted messages. To this end, we focus on asynchronous and randomized distributed techniques. Based on consensus techniques and iterative hard thresholding methods, we propose three methods that attempt to minimize the given function while promoting sparsity of the solution: asynchronous hard thresholding (AHT), broadcast hard thresholding (BHT), and gossip hard thresholding (GHT). Although similar in many aspects, the proposed algorithms do not admit a unified analysis. Specifically, we theoretically prove the convergence and characterize the limit points of AHT in regular networks under proper assumptions on the functions to be minimized. For BHT and GHT, instead, we characterize the fixed points of the maps that rule their dynamics in terms of stationary points of the original problem. Finally, we illustrate the implementation of our techniques in compressed sensing and present several numerical results on performance and the number of transmissions required for convergence.
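The centralized building block that the AHT/BHT/GHT schemes distribute over a network is iterative hard thresholding, x <- H_s(x + mu * A.T @ (y - A @ x)), for sparse least squares. A minimal sketch on synthetic compressed-sensing data (all parameters invented for the example):

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, s, step=1.0, n_iter=300):
    """Centralized iterative hard thresholding: gradient step on the
    least-squares loss followed by projection onto s-sparse vectors."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

# Sparse recovery: a 5-sparse signal from 120 noiseless Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 200)) / np.sqrt(120)
x_true = np.zeros(200)
support = rng.choice(200, 5, replace=False)
x_true[support] = rng.standard_normal(5)
y = A @ x_true
x_hat = iht(A, y, s=5)
err = np.linalg.norm(x_hat - x_true)
print(err)
```

In the distributed variants, each agent holds only its local objective, and the consensus step spreads information so that the network's iterates jointly track this centralized recursion.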