Inspired by the definition of the tensor-tensor product and the tensor tubal rank, a randomized singular value decomposition of tensors is presented in this paper. Based on the tensor singular value decomposition (t-SVD) and the tensor randomized singular value decomposition (t-RSVD), we obtain two efficient algorithms for solving the tensor completion problem. We also propose an adaptive rank method to adjust the tubal rank of the tensor. The main advantage of the random projection-based t-RSVD is that it cuts down the computing time on large-scale problems. In the optimization process, an alternating minimization algorithm is employed to solve the tensor completion problem. Finally, numerical experiments indicate that the t-RSVD is competitive and consumes less time than the t-SVD. The efficiency and feasibility of our methods are illustrated by image and video recovery.
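To make the random-projection idea concrete, here is a minimal NumPy sketch of a slice-wise randomized t-SVD: the tensor is transformed along its third mode with the FFT, a Halko-style randomized SVD is applied to each frontal slice, and the factors are transformed back. The function names, oversampling parameter, and target tubal rank r are assumptions of this illustration, not the paper's t-RSVD or completion algorithms.

```python
import numpy as np

def randomized_svd(A, r, oversample=5):
    """Random-projection truncated SVD of a (possibly complex) matrix."""
    m, n = A.shape
    G = np.random.randn(n, r + oversample)     # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ G)                 # orthonormal basis for the approximate range
    B = Q.conj().T @ A                         # small projected matrix
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vh[:r, :]

def t_rsvd(T, r):
    """Slice-wise randomized t-SVD sketch: factor each frontal slice in the Fourier domain."""
    m, n, p = T.shape
    That = np.fft.fft(T, axis=2)               # DFT along the third (tube) dimension
    U = np.zeros((m, r, p), dtype=complex)
    S = np.zeros((r, r, p), dtype=complex)
    V = np.zeros((n, r, p), dtype=complex)
    for k in range(p):
        Uk, sk, Vkh = randomized_svd(That[:, :, k], r)
        U[:, :, k], S[:, :, k], V[:, :, k] = Uk, np.diag(sk), Vkh.conj().T
    # Transform back; the t-product of the factors approximates T with tubal rank r.
    return np.fft.ifft(U, axis=2), np.fft.ifft(S, axis=2), np.fft.ifft(V, axis=2)

# Tiny usage example on a random third-order tensor.
T = np.random.rand(30, 20, 8)
U, S, V = t_rsvd(T, r=5)
```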
Online optimization problems are well understood in the convex case, where algorithmic performance is typically measured relative to the best fixed decision. In this article, we shed light on online nonconvex optimization problems in which algorithms are evaluated against the optimal decision at each time using the more useful notion of dynamic regret. The focus is on loss functions that are arbitrarily nonconvex but have global solutions that are slowly time-varying. We address this problem by first analyzing the region around the global solution at each time to define time-varying target sets, which contain the global solution and exhibit desirable properties under the projected gradient descent algorithm. All points in a target set satisfy the proximal Polyak-Lojasiewicz inequality, among other conditions. Then, we introduce two algorithms and prove that the dynamic regret for each algorithm is bounded by a function of the temporal variation in the optimal decision. The first algorithm assumes that the decision maker has some prior knowledge about the initial objective function and may query the gradient repeatedly at each time. This algorithm ensures that decisions are within the target set at every time. The second algorithm makes no assumption about prior knowledge. It instead relies on random sampling and memory to find and then track the target sets over time. In this case, the landscape of the loss functions determines the likelihood that the dynamic regret will be small. Numerical experiments validate these theoretical results and highlight the impact of a single low-complexity problem early in the sequence.
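As a hedged illustration of the bookkeeping behind such dynamic-regret bounds, the sketch below runs one projected gradient step per round and accumulates both the dynamic regret against the time-varying minimizers and the path-length term that appears in the bound. The ball-shaped feasible set, step size, and toy losses are assumptions of the sketch, not the article's algorithms or target sets.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball, standing in for a generic feasible set."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def online_pgd(losses, grads, minimizers, x0, step=0.1, radius=1.0):
    """
    One projected gradient step per round, plus dynamic-regret bookkeeping:
    regret is measured against the time-varying minimizer x_t^*, and the bound
    is expressed through the path length sum_t ||x_t^* - x_{t-1}^*||.
    """
    x = np.asarray(x0, dtype=float)
    regret, path_length = 0.0, 0.0
    for t in range(len(losses)):
        regret += losses[t](x) - losses[t](minimizers[t])    # dynamic regret increment
        x = project_ball(x - step * grads[t](x), radius)     # projected gradient step
        if t > 0:
            path_length += np.linalg.norm(minimizers[t] - minimizers[t - 1])
    return regret, path_length

# Toy run with drifting quadratics f_t(x) = ||x - c_t||^2, whose minimizer is c_t
# (a convex stand-in used only to exercise the bookkeeping).
centers = [np.array([0.3 + 0.01 * t, 0.0]) for t in range(50)]
losses = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
print(online_pgd(losses, grads, centers, x0=np.zeros(2)))
```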
This letter investigates the failure diagnosability of stochastic discrete-event systems (DES). Specifically, A-Diagnosability (proposed by Thorsley et al., 2005) is studied, which requires every failure to be stochastically diagnosable with arbitrarily high probability within a sufficiently large delay bound. The verification of A-Diagnosability was later shown to be PSPACE-complete, so a polynomial testing algorithm likely does not exist. This letter addresses this gap by providing a new necessary and sufficient condition for checking A-Diagnosability of stochastic DES, based on which a probabilistic test is also proposed. The complexity of the proposed algorithm is polynomial in the number of system states and events, at the cost that the proposed test may incur certain test errors. Furthermore, the trade-off between computational complexity and the probability of test error can be calibrated through a hyper-parameter. Several working examples are provided to illustrate the proposed verification condition and probabilistic test.
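The letter's polynomial test is not reproduced here; as a rough, assumption-laden illustration of what A-Diagnosability asks for operationally, the sketch below encodes a toy stochastic DES as a probabilistic transition table, simulates runs, and estimates by Monte Carlo the probability that a fault still admits a fault-free explanation a fixed number of observations after it occurs. The event-naming convention, the belief tracking, and the Monte Carlo check are all constructions of this sketch, not the letter's test.

```python
import random

# Toy stochastic DES sketch: trans[s] = [(event, next_state, probability), ...].
# Events in UNOBS are unobservable; FAULT is an unobservable fault event.
UNOBS, FAULT = {"u", "f"}, "f"

def simulate(trans, s0, horizon):
    """One random run: the observed word and the observation index at which a fault occurred."""
    s, obs, fault_at = s0, [], None
    for _ in range(horizon):
        choices = trans.get(s, [])
        if not choices:
            break
        e, t, _ = random.choices(choices, weights=[p for _, _, p in choices])[0]
        if e == FAULT and fault_at is None:
            fault_at = len(obs)
        if e not in UNOBS:
            obs.append(e)
        s = t
    return obs, fault_at

def fault_free_belief(trans, s0, obs):
    """States reachable by fault-free runs consistent with each prefix of the observation."""
    def uo_closure(states):
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for e, t, _ in trans.get(s, []):
                if e in UNOBS and e != FAULT and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    belief = uo_closure({s0})
    beliefs = [belief]
    for o in obs:
        nxt = {t for s in belief for e, t, _ in trans.get(s, []) if e == o}
        belief = uo_closure(nxt)
        beliefs.append(belief)
    return beliefs

def monte_carlo_a_diag(trans, s0, runs=10000, horizon=50, delay=10):
    """Estimate the probability that a fault is undiagnosed `delay` observations after it occurs."""
    faulty, missed = 0, 0
    for _ in range(runs):
        obs, fault_at = simulate(trans, s0, horizon)
        if fault_at is None or len(obs) < fault_at + delay:
            continue
        faulty += 1
        beliefs = fault_free_belief(trans, s0, obs[:fault_at + delay])
        if beliefs[-1]:          # a fault-free explanation still exists: not yet diagnosed
            missed += 1
    return missed / faulty if faulty else None

# Toy model: after the fault 'f', observable 'b' occurs, which fault-free runs cannot produce.
toy = {0: [("a", 0, 0.7), ("f", 1, 0.3)], 1: [("b", 1, 1.0)]}
print(monte_carlo_a_diag(toy, 0, delay=1))   # 0.0: every fault is diagnosed one observation later
```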
The Leslie matrix plays a crucial role in analyzing changes in survival and birth rates, and thus in determining a population's evolution. However, when continuous solutions of population growth models cannot be readily evaluated, discrete solutions become necessary. The Leslie matrix model, being a discrete model based on the age stages of the population, is particularly relevant in this context. In this work, we present a computational mathematical tool designed to be used with the Leslie matrix model, which consists of reconstructing a Leslie matrix of any order from a given set of real numbers.
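One natural reading of such a tool is an inverse eigenvalue construction: given a set of real numbers interpreted as the desired spectrum, build a Leslie matrix whose characteristic polynomial has exactly those roots. The sketch below does this via the standard characteristic-polynomial identity for Leslie matrices; the free choice of survival rates and the function name are assumptions, and the result need not be biologically valid (fertilities can come out negative).

```python
import numpy as np

def leslie_from_spectrum(eigenvalues, survival=None):
    """
    Build an n x n Leslie matrix whose characteristic polynomial has the given roots,
    using det(lambda*I - L) = lambda^n - f1*lambda^(n-1) - f2*s1*lambda^(n-2) - ...
                              - fn*s1*...*s_{n-1}.
    The survival rates s_i are free parameters here (default 1).
    """
    n = len(eigenvalues)
    s = np.ones(n - 1) if survival is None else np.asarray(survival, dtype=float)
    coeffs = np.poly(eigenvalues)                        # [1, c1, ..., cn]
    cumprod = np.concatenate(([1.0], np.cumprod(s)))     # 1, s1, s1*s2, ...
    f = -coeffs[1:] / cumprod                            # fertilities from the identity above
    L = np.zeros((n, n))
    L[0, :] = f                                          # first row: fertility rates
    L[np.arange(1, n), np.arange(0, n - 1)] = s          # sub-diagonal: survival rates
    return L

# Quick check: the eigenvalues of the constructed matrix match the requested spectrum.
L = leslie_from_spectrum([1.1, -0.3, 0.2])
print(np.sort(np.linalg.eigvals(L)))                     # approximately -0.3, 0.2, 1.1
```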
A Maximal Independent Set (MIS) is an inclusion-maximal set of pairwise non-adjacent vertices. The computation of an MIS is one of the core problems in distributed computing. In this article, we introduce and analyze a finite-state distributed randomized algorithm for computing an MIS on arbitrary undirected graphs. Our algorithm is self-stabilizing (it reaches a correct output from any initial configuration) and can be implemented on systems with very scarce resources. We analyze the convergence time of the proposed algorithm, showing that in many cases it converges in logarithmic time with high probability.
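For orientation, here is a classical Luby-style randomized MIS sketch on a static graph; it conveys the flavor of randomized symmetry breaking but is not the finite-state, self-stabilizing algorithm analyzed in the article.

```python
import random

def luby_mis(adj):
    """
    Luby-style randomized MIS on an undirected graph given as {v: set_of_neighbours}.
    In each round every surviving vertex draws a random priority; local minima join
    the MIS, and they and their neighbours are removed.
    """
    remaining = set(adj)
    mis = set()
    while remaining:
        prio = {v: random.random() for v in remaining}
        winners = {v for v in remaining
                   if all(prio[v] < prio[u] for u in adj[v] if u in remaining)}
        mis |= winners
        remaining -= winners
        remaining -= {u for v in winners for u in adj[v]}
    return mis

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(luby_mis(graph))   # e.g. {1, 3} or {0, 3}: independent and maximal
```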
An easy-to-implement iterative algorithm that enables efficient and scalable spectral analysis of dense matrices is presented. The algorithm relies on the approximation of a matrix's singular values by those of a series of smaller matrices formed from uniform random sampling of its rows and columns. It is shown that, for sufficiently incoherent and rank-deficient matrices, the singular values [are expected to] decay at the same rate as those of matrices formed via this sampling scheme, which permits such matrices' ranks to be accurately estimated from the smaller matrices' spectra. Moreover, for such a matrix of size m x n, it is shown that the dominant singular values are [expected to be] sqrt(mn)/k times those of a k x k matrix formed by randomly sampling k of its rows and columns. Starting from a small initial guess k = k_0, the algorithm repeatedly doubles k until two convergence criteria are met; the criteria to ensure that k is sufficiently large to estimate the singular values, to the desired accuracy, are presented. The algorithm's properties are analyzed theoretically and its efficacy is studied numerically for small to very large matrices that result from the discretization of integral-equation operators, with various physical kernels common in electromagnetics and acoustics, as well as for artificial matrices of various incoherence and rank-deficiency properties.
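A minimal sketch of the doubling scheme just described, assuming a single simplified stopping rule in place of the paper's two convergence criteria: sample k rows and k columns uniformly, rescale the sample's singular values by sqrt(mn)/k, and double k until successive estimates of the leading singular values stabilize.

```python
import numpy as np

def sampled_singular_values(A, k, rng):
    """Singular values of a k x k uniform row/column sample, rescaled by sqrt(m*n)/k."""
    m, n = A.shape
    rows = rng.choice(m, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    return (np.sqrt(m * n) / k) * np.linalg.svd(A[np.ix_(rows, cols)], compute_uv=False)

def estimate_spectrum(A, k0=16, tol=0.05, n_top=10, seed=0):
    """
    Doubling sketch: estimate the dominant singular values from a k x k sample and
    double k until successive estimates of the top few values agree to relative
    tolerance `tol` (a simplification of the paper's convergence criteria).
    """
    rng = np.random.default_rng(seed)
    k = k0
    estimate = sampled_singular_values(A, k, rng)
    while 2 * k <= min(A.shape):
        k *= 2
        new = sampled_singular_values(A, k, rng)
        top = min(n_top, len(estimate))
        if np.max(np.abs(new[:top] - estimate[:top]) / (new[:top] + 1e-30)) < tol:
            return new, k
        estimate = new
    return estimate, k

# Usage on a synthetic rank-deficient matrix (rank 60).
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 60)) @ rng.standard_normal((60, 1500))
sigma, k_final = estimate_spectrum(A)
```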
ISBN (Print): 9783031656262; 9783031656279
Approximate model counting is the task of approximating the number of solutions to an input Boolean formula. The state-of-the-art approximate model counter for formulas in conjunctive normal form (CNF), ApproxMC, provides a scalable means of obtaining model counts with probably approximately correct (PAC)-style guarantees. Nevertheless, the validity of ApproxMC's approximation relies on a careful theoretical analysis of its randomized algorithm and the correctness of its highly optimized implementation, especially the latter's stateful interactions with an incremental CNF satisfiability solver capable of natively handling parity (XOR) constraints. We present the first certification framework for approximate model counting with formally verified guarantees on the quality of its output approximation. Our approach combines: (i) a static, once-off, formal proof of the algorithm's PAC guarantee in the Isabelle/HOL proof assistant; and (ii) dynamic, per-run, verification of ApproxMC's calls to an external CNF-XOR solver using proof certificates. We detail our general approach to establish a rigorous connection between these two parts of the verification, including our blueprint for turning the formalized, randomized algorithm into a verified proof checker, and our design of proof certificates for both ApproxMC and its internal CNF-XOR solving steps. Experimentally, we show that certificate generation adds little overhead to an approximate counter implementation, and that our certificate checker is able to fully certify 84.7% of instances with generated certificates when given the same time and memory limits as the counter.
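For context on the counting side (the certification machinery itself is not sketched here), the toy below illustrates the CNF-XOR hashing idea that ApproxMC builds on: random parity constraints partition the solution space into cells, and the count in one small cell is scaled back up. It uses brute-force enumeration instead of a SAT solver and omits ApproxMC's thresholds, repetitions, and PAC guarantees.

```python
import random
from itertools import product

def count_with_xors(clauses, n_vars, xors):
    """Brute-force count of assignments satisfying the CNF clauses and the XOR constraints."""
    count = 0
    for bits in product([0, 1], repeat=n_vars):
        cnf_ok = all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)
        xor_ok = all(sum(bits[v] for v in vs) % 2 == parity for vs, parity in xors)
        count += cnf_ok and xor_ok
    return count

def toy_hashing_counter(clauses, n_vars, thresh=8, seed=0):
    """
    Add random XOR (parity) constraints until at most `thresh` solutions of the
    CNF-XOR formula remain, then scale the residual count by 2^(number of XORs).
    """
    rng = random.Random(seed)
    xors = []
    while True:
        cells = count_with_xors(clauses, n_vars, xors)
        if cells <= thresh or len(xors) >= n_vars:
            return cells * 2 ** len(xors)
        xors.append(([v for v in range(n_vars) if rng.random() < 0.5], rng.randint(0, 1)))

# DIMACS-style literals: (x1 or x2) and (not x1 or x3) over 4 variables has 8 models.
print(toy_hashing_counter([[1, 2], [-1, 3]], 4))   # prints 8
```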
We study an online caching problem in which requests can be served by a local cache to avoid retrieval costs from a remote server. The cache can update its state after a batch of requests and store an arbitrarily small fraction of each file. We study no-regret algorithms based on Online Mirror Descent (OMD) strategies. We show that bounds for the regret crucially depend on the diversity of the request process, provided by the diversity ratio R/h, where R is the size of the batch and h is the maximum multiplicity of a request in a given batch. We characterize the optimality of OMD caching policies w.r.t. regret under different diversity regimes. We also prove that, when the cache must store the entire file, rather than a fraction, OMD strategies can be coupled with a randomized rounding scheme that preserves regret guarantees, even when update costs cannot be neglected. We provide a formal characterization of the rounding problem through optimal transport theory, and moreover we propose a computationally efficient randomized rounding scheme.
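A minimal sketch of the OMD viewpoint on fractional caching, assuming the Euclidean mirror map (so the update is online projected gradient ascent) and a capped-simplex feasible set; since the per-batch reward is linear in the cache state, its gradient is simply the vector of request counts. The projection routine, step size, and batch encoding are assumptions of this sketch, not the policies or regimes analyzed in the paper.

```python
import numpy as np

def project_capped_simplex(y, capacity):
    """Euclidean projection onto {x : 0 <= x_i <= 1, sum_i x_i = capacity}, by bisection on the shift."""
    lo, hi = y.min() - 1.0, y.max()    # sum(clip(y - tau, 0, 1)) is non-increasing in tau
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > capacity:
            lo = tau
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def omd_caching(request_batches, n_files, capacity, step=0.1):
    """OMD with the Euclidean mirror map: one (sub)gradient step and projection per batch."""
    x = np.full(n_files, capacity / n_files)      # start from the uniform fractional cache
    hits = 0.0
    for batch in request_batches:                 # batch = list of requested file ids
        counts = np.bincount(batch, minlength=n_files).astype(float)
        hits += counts @ x                        # fraction of each request served locally
        x = project_capped_simplex(x + step * counts, capacity)
    return hits, x

# Toy usage: 3 files, cache of size 1, batches of R = 4 requests.
batches = [[0, 0, 1, 2], [0, 1, 1, 2], [2, 2, 2, 0]]
hits, x = omd_caching(batches, n_files=3, capacity=1.0)
```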
In this paper we describe a randomized algorithm which returns a maximal spanning forest of an unknown weighted undirected graph making O(n) CUT queries in expectation. For weighted graphs, this is optimal due to a result of [Auza and Lee, 2021], which shows an Ω(n) lower bound for zero-error randomized algorithms. These questions have been extensively studied in the past few years, especially due to the problem's connections to symmetric submodular function minimization. We also describe a simple polynomial-time deterministic algorithm that makes O(n log n / log log n) queries on undirected unweighted graphs and returns a maximal spanning forest, thereby (slightly) improving upon the state of the art.
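Far from the query-optimal algorithms discussed above, the following Borůvka-style sketch shows how CUT queries alone already suffice on an unweighted graph: for disjoint vertex sets A and B, E(A, B) = (cut(A) + cut(B) - cut(A ∪ B)) / 2, and binary search over vertex sets locates one concrete edge leaving each component. The oracle encoding and function names are assumptions of this illustration.

```python
def make_cut_oracle(edges):
    """CUT(S): number of edges with exactly one endpoint in S (the only access to the graph)."""
    def cut(S):
        S = set(S)
        return sum((u in S) != (v in S) for u, v in edges)
    return cut

def edges_between(cut, A, B):
    """E(A, B) for disjoint A and B, from three cut queries."""
    return (cut(A) + cut(B) - cut(set(A) | set(B))) // 2

def find_neighbour(cut, A, B):
    """Binary search for a single vertex of B with an edge into A (assumes E(A, B) > 0)."""
    B = list(B)
    while len(B) > 1:
        half = B[: len(B) // 2]
        B = half if edges_between(cut, A, half) > 0 else B[len(B) // 2:]
    return B[0]

def spanning_forest(n, cut):
    """Borůvka-style: find one outgoing edge per component via cut queries, merge, repeat."""
    comp = [{v} for v in range(n)]
    forest = []
    grew = True
    while grew:
        grew = False
        for C in list(comp):
            if C not in comp:                       # already merged this round
                continue
            rest = set(range(n)) - C
            if not rest or cut(C) == 0:             # component is saturated
                continue
            v = find_neighbour(cut, C, rest)        # vertex outside C adjacent to C
            u = find_neighbour(cut, {v}, C)         # its neighbour inside C
            D = next(X for X in comp if v in X)
            comp.remove(C)
            comp.remove(D)
            comp.append(C | D)
            forest.append((u, v))
            grew = True
    return forest

edges = [(0, 1), (1, 2), (3, 4)]
print(spanning_forest(5, make_cut_oracle(edges)))   # e.g. [(0, 1), (2, 1), (3, 4)]
```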
ISBN (Print): 9798400704161
In the Memory Reallocation Problem a set of items of various sizes must be dynamically assigned to non-overlapping contiguous chunks of memory. It is guaranteed that the sum of the sizes of all items present at any time is at most a (1 - ε)-fraction of the total size of memory (i.e., the load factor is at most 1 - ε). The allocator receives insert and delete requests online, and can rearrange existing items to handle the requests, but at a reallocation cost defined to be the sum of the sizes of items moved divided by the size of the item being inserted/deleted. The folklore algorithm for Memory Reallocation achieves a cost of O(ε^{-1}) per update. In recent work at FOCS'23, Kuszmaul showed that, in the special case where each item is promised to be smaller than an ε^4-fraction of memory, it is possible to achieve expected update cost O(log ε^{-1}). Kuszmaul conjectures, however, that for larger items the folklore algorithm is optimal. In this work we disprove Kuszmaul's conjecture, giving an allocator that achieves expected update cost O(ε^{-1/2} polylog ε^{-1}) on any input sequence. We also give the first non-trivial lower bound for the Memory Reallocation Problem: we demonstrate an input sequence on which any resizable allocator (even offline) must incur amortized update cost at least Ω(log ε^{-1}). Finally, we analyze the Memory Reallocation Problem on a stochastic sequence of inserts and deletes, with random sizes in [δ, 2δ] for some δ. We show that, in this simplified setting, it is possible to achieve O(log ε^{-1}) expected update cost, even in the "large-item" parameter regime (δ > ε^4).
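To make the cost model concrete, here is a deliberately naive compacting allocator together with the per-operation cost accounting defined above; it is only an illustration of the cost measure and achieves none of the bounds discussed (neither the folklore O(ε^{-1}) nor the new O(ε^{-1/2} polylog ε^{-1})).

```python
class CompactingAllocator:
    """
    Naive allocator: items are kept packed at the front of memory, and a delete
    closes the gap by shifting every later item left. The reported cost of an
    operation is (total size of other items moved) / (size of the requested item).
    """
    def __init__(self, memory_size):
        self.memory_size = memory_size
        self.items = []            # list of (item_id, size), packed left to right

    def used(self):
        return sum(size for _, size in self.items)

    def insert(self, item_id, size):
        assert self.used() + size <= self.memory_size, "load factor exceeded"
        self.items.append((item_id, size))            # append at the end: nothing else moves
        return 0.0

    def delete(self, item_id):
        idx = next(i for i, (x, _) in enumerate(self.items) if x == item_id)
        _, size = self.items.pop(idx)
        moved = sum(s for _, s in self.items[idx:])   # everything after the gap shifts left
        return moved / size

alloc = CompactingAllocator(memory_size=1.0)
for name, size in [("a", 0.3), ("b", 0.4), ("c", 0.2)]:
    alloc.insert(name, size)
print(alloc.delete("a"))   # cost = (0.4 + 0.2) / 0.3 = 2.0
```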