ISBN (digital): 9798350326581
ISBN (print): 9798350326598
Bosonic quantum computing, based on infinite-dimensional qumodes, has shown promise for various practical applications that are classically hard. However, the lack of compiler optimizations has hindered its full potential. This paper introduces Bosehedral, an efficient compiler optimization framework for (Gaussian) Boson sampling on bosonic quantum hardware. Bosehedral overcomes the challenge of handling infinite-dimensional qumode gate matrices by performing all of its program analysis and optimizations at a higher algorithmic level, using a compact unitary matrix representation. It optimizes qumode gate decomposition and logical-to-physical qumode mapping, and introduces a tunable probabilistic gate dropout method. Overall, Bosehedral significantly improves performance by accurately approximating the original program with far fewer gates. Our evaluation shows that Bosehedral can greatly reduce the program size while maintaining high approximation fidelity, which translates into significant end-to-end application performance improvements.
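To convey the flavor of tunable probabilistic gate dropout, here is a minimal, hypothetical sketch; the abstract does not specify Bosehedral's actual dropout criterion, so the angle-based rule and the p_max/eps knobs below are illustrative assumptions only. The idea: gates whose parameters are close to the identity contribute little to the overall unitary and can be dropped with a tunable probability.

    import random

    def dropout_gates(gates, p_max=0.9, eps=0.1):
        """Probabilistically drop near-identity gates (illustrative only;
        Bosehedral's real criterion works on its compact unitary
        representation, not on per-gate angles).

        gates: list of (name, theta) pairs; |theta| near 0 means the gate
               is close to the identity.
        p_max: maximum dropout probability (the tunable knob).
        eps:   angle scale below which a gate is considered droppable.
        """
        kept = []
        for name, theta in gates:
            # Drop probability decays with the gate's deviation from identity.
            p_drop = p_max * max(0.0, 1.0 - abs(theta) / eps)
            if random.random() >= p_drop:
                kept.append((name, theta))
        return kept

    circuit = [("BS", 0.02), ("PS", 1.3), ("BS", 0.8), ("PS", 0.01)]
    print(dropout_gates(circuit))  # near-identity gates are likely dropped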
ISBN (digital): 9789464593617
ISBN (print): 9798331519773
One of the key challenges when designing a ten-hour educational short course, entitled “A Hands-on Approach for Implementing Stochastic Optimization Algorithms from Scratch”, which was accepted for inclusion at ICASSP'23, was addressing the question: how can the Stochastic Gradient Descent (SGD) algorithm and its variants be introduced in a consistent, accessible fashion? From a simplistic perspective, the SGD algorithm is nothing more than the classical gradient descent (GD) algorithm with a (very) noisy gradient. Nonetheless, neither SGD's most influential variants (e.g., AdaGrad, RMSprop, and Adam) nor more recent ones (LookAhead, E-Adam, MadGrad, among several others) can be explained in such superficial terms. Moreover, such variants are usually provided as black boxes by most deep-learning (DL) libraries (e.g., TensorFlow, PyTorch). In this article, based on the experience of the aforementioned short course, I propose to link the SGD algorithm and its variants via an “evolutionary path”, in which each SGD variant may be understood as a set of add-on features over the vanilla SGD, resulting in a generalized algorithm along with a “family tree” graph, both of which are intuitive and useful when implementing a given SGD variant.
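The “add-on features” view can be made concrete in a few lines. Below is a minimal sketch, not the article's reference implementation: vanilla SGD, momentum, and an RMSprop-style adaptive scaling share one update loop, and enabling both add-ons (plus bias correction, omitted here) approaches Adam.

    import numpy as np

    def sgd_step(w, g, state, lr=0.01, momentum=0.0, adaptive=False,
                 beta2=0.999, eps=1e-8):
        """One generalized SGD step; each variant is an add-on over vanilla SGD."""
        v = state.setdefault("v", np.zeros_like(w))  # velocity buffer
        s = state.setdefault("s", np.zeros_like(w))  # 2nd-moment buffer

        v[:] = momentum * v + g                      # add-on 1: momentum
        step = v if momentum > 0 else g
        if adaptive:                                 # add-on 2: per-coordinate scaling
            s[:] = beta2 * s + (1 - beta2) * g ** 2
            step = step / (np.sqrt(s) + eps)
        return w - lr * step

    # Vanilla GD plus a (very) noisy gradient on f(w) = ||w||^2.
    w, state = np.ones(3), {}
    for _ in range(100):
        g = 2 * w + 0.1 * np.random.randn(3)         # noisy gradient
        w = sgd_step(w, g, state, lr=0.05, momentum=0.9, adaptive=True)
    print(w)                                          # approaches the origin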
This paper discusses the problem of estimating a stochastic signal from nonlinear uncertain observations with time-correlated additive noise described by a first-order Markov process. Random deception attacks are assu...
ISBN (digital): 9798350355314
ISBN (print): 9798350355321
Customer churn, a critical metric for businesses reliant on customer loyalty, is the loss of customers from a company. This study examines churn among college and university students, which leads to financial losses, weakened accreditation, and diminished reputations for educational institutions, ultimately reducing their appeal to prospective students. To identify factors contributing to student churn and to classify students as likely to churn or not, this research uses the Decision Tree, Random Forest, and XGBoost algorithms. These classification algorithms are well suited to the task, with XGBoost and Random Forest enhancing Decision Trees through boosting and bagging, respectively. The data was sourced from the Academic Information Bureau and preprocessed before model development. The best-performing model is used to predict student churn on new, unseen data. Findings indicate varying performance across models in terms of accuracy, precision, recall, and F1-score. The XGBoost model excelled, achieving 98% accuracy, 99% precision, 96% recall, and a 97% F1-score, a significant improvement over the base Decision Tree model across all metrics. Notably, all models performed better on the majority class (not churn) than on the minority class (churn).
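As a minimal sketch of such a pipeline (the Academic Information Bureau data is not public, so the synthetic features below are stand-ins), an XGBoost churn classifier reporting the same four metrics could look like:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report
    from xgboost import XGBClassifier

    # Synthetic stand-in for the student data; churn (1) is the minority class.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1.2).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    model = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
    model.fit(X_tr, y_tr)

    # Per-class accuracy, precision, recall, and F1, as in the study's comparison.
    print(classification_report(y_te, model.predict(X_te)))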
Given an edge-colored graph, the goal of the proportional fair matching problem is to find a maximum weight matching while ensuring proportional representation (with respect to the number of edges) of each color. The ...
In this work, we analyze a sublinear-time algorithm for selecting a few rows and columns of a matrix for low-rank approximation purposes. The algorithm is based on an initial uniformly random selection of rows and col...
Given a weighted graph G, a minimum weight α-spanner is a least-weight subgraph H ⊆ G that preserves minimum distances between all node pairs up to a factor of α. There are many results on heuristics and approximati...
ISBN (digital): 9798350350128
ISBN (print): 9798350350135
Distributed training has emerged as a critical application in clusters due to the widespread adoption of AI technology across various domains. However, as distributed training continues to advance, it has become increasingly time-consuming. To address this challenge, researchers have explored leveraging In-Network Aggregation (INA) to expedite distributed model training. Specifically, by harnessing programmable hardware, such as Intel Tofino switches, INA can aggregate gradients within the network, thereby reducing the amount of gradient transmission and accelerating distributed training. However, previous works assume fixed routing selection and batch size, ignoring their impact on model convergence and thus incurring extended completion time. To bridge this gap, we propose InGo, a pioneering approach that jointly considers in-network aggregation routing and batch size adjustment and provides a rigorous convergence analysis. We then formally define the problem of in-network aggregation routing with batch size adjustment and present an efficient algorithm with bounded approximation factors to solve it. Through extensive experiments on both physical platforms and simulated environments, we demonstrate that InGo significantly reduces completion time by 25.2%-74.7% compared to state-of-the-art solutions.
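The core INA idea (summing worker gradients inside the network so that a single aggregated message travels upstream instead of one per worker) can be sketched with a toy aggregator; real INA runs on programmable switches such as Intel Tofino with fixed-point registers, which this Python model glosses over.

    import numpy as np

    class ToySwitch:
        """Toy in-network aggregator: buffers gradients from n workers and
        emits one averaged gradient, cutting upstream traffic n-fold."""
        def __init__(self, n_workers, dim):
            self.n = n_workers
            self.buf = np.zeros(dim)
            self.seen = 0

        def receive(self, grad):
            self.buf += grad
            self.seen += 1
            if self.seen == self.n:       # all workers reported this round
                out = self.buf / self.n
                self.buf[:] = 0.0
                self.seen = 0
                return out                # single upstream message
            return None                   # still waiting; nothing forwarded

    switch = ToySwitch(n_workers=4, dim=3)
    for g in np.random.randn(4, 3):       # one gradient per worker
        agg = switch.receive(g)
    print(agg)                            # aggregated gradient after worker 4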
ISBN (digital): 9798350362077
ISBN (print): 9798350362084
Dynamic compressed sensing (DCS) techniques have been applied to enhance the performance of channel estimation (CE) in underwater acoustic (UWA) communications. Existing DCS-CE schemes are mainly applicable to slow-varying channels, and solutions that work under rapidly varying channel conditions are highly desirable. In this paper, tracking of the time-varying sparse UWA channel is formulated as an $\ell_{p}$-norm regularized recursive least squares (RLS) problem, which is then solved via a proximal gradient (PG) algorithm. The resulting CE scheme is named the dynamic PG (DPG) CE method. Experimental results verify the advantage of the proposed DPG CE scheme over existing sparse CE schemes.
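A minimal sketch of the proximal-gradient structure is shown below for the p = 1 special case, where the proximal operator reduces to the soft threshold; the paper's actual $\ell_{p}$ proximal operator and the RLS forgetting-factor machinery are not given in this abstract, so this illustrates the PG iteration only.

    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t*||.||_1 (the p = 1 special case)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def pg_sparse_estimate(A, y, h, lam=0.05, mu=0.1, iters=200):
        """Estimate a sparse channel h by minimizing
        ||y - A h||^2 + lam * ||h||_1 with proximal-gradient steps;
        h warm-starts from the previous block's estimate (the 'dynamic' part)."""
        for _ in range(iters):
            grad = A.T @ (A @ h - y)                 # gradient of the LS term
            h = soft_threshold(h - mu * grad, mu * lam)
        return h

    rng = np.random.default_rng(0)
    h_true = np.zeros(50)
    h_true[[3, 17, 40]] = [1.0, -0.6, 0.4]           # sparse toy channel
    A = rng.standard_normal((30, 50)) / np.sqrt(30)  # pilot measurement matrix
    y = A @ h_true + 0.01 * rng.standard_normal(30)
    h_est = pg_sparse_estimate(A, y, h=np.zeros(50))
    print(np.round(h_est[[3, 17, 40]], 2))           # close to the true taps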
ISBN (digital): 9798350382501
ISBN (print): 9798350382518
In this paper we consider the discrete variant of the well-known Influence Maximization Problem (IMP). Given an influence model, it consists in finding a so-called seed set of influential users of fixed size that maximizes the total spread of influence over the network. We limit our study to the influence model called the Deterministic Linear Threshold Model (DLTM). It is well known that IMP under DLTM is computationally hard and admits no approximation algorithm with a constant approximation ratio if $P\neq NP$. It therefore makes sense to apply metaheuristic algorithms to this problem. In the present research we propose new algorithms for solving IMP under DLTM, based on a technique that combines evolutionary and genetic strategies for pseudo-Boolean optimization with a greedy algorithm used to find an initial approximation. We use the proposed strategy to solve another well-known combinatorial problem on networks called Target Set Selection (TSS), which we solve as a sequence of IMPs with gradually decreasing target set size. In the experimental part of the paper we demonstrate that our new strategy outperforms previous approaches to solving TSS, yielding smaller target sets of good quality.
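The DLTM spread used to score a candidate seed set can be simulated directly; a minimal sketch on a toy graph (the thresholds and adjacency below are illustrative) activates a node once the number of its active neighbors reaches its threshold and iterates to a fixed point.

    def dltm_spread(adj, thresholds, seed_set):
        """Influence spread of seed_set under the Deterministic Linear
        Threshold Model: a node activates once the count of its active
        neighbors reaches its threshold; iterate until no node changes."""
        active = set(seed_set)
        changed = True
        while changed:
            changed = False
            for v in adj:
                if v not in active and \
                        sum(u in active for u in adj[v]) >= thresholds[v]:
                    active.add(v)
                    changed = True
        return len(active)

    adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3, 5], 5: [4]}
    thr = {1: 1, 2: 2, 3: 1, 4: 2, 5: 1}
    print(dltm_spread(adj, thr, seed_set={1}))  # toy spread: all 5 nodes activate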