The Kramers-Kronig (KK) receiver breaks through the limitation that a single photodetector (PD) can only capture intensity information, and has become a research hotspot in optical communication systems. However, in the conventional KK algorithm, the required nonlinear operations (such as the logarithm (log), exponential, and square-root (sqrt) functions) significantly broaden the signal spectrum, so digital upsampling must be employed at the beginning of the digital signal processing (DSP). The resulting high sampling rate demands substantial hardware resources and power consumption, which is a major obstacle to implementing the KK receiver. In this paper, we propose a novel improved algorithm for the KK receiver that requires no digital upsampling and lowers the required carrier-to-signal power ratio (CSPR) compared with the typical upsampling-free KK receiver. In the proposed algorithm, we adopt the mathematical approximations of Taylor expansion and Newton's tangent method to avoid the log and sqrt functions. We then remove the exponential function by expressing the complex signal in Cartesian form (i.e., real plus imaginary parts), where the real part is recovered by imaginary-part-induced signal-signal beat interference (SSBI) removal followed by a sqrt operation. Meanwhile, we use the simplified hybrid KK-SSBI cancellation (SHKK-SSBIC) technique to reduce the CSPR required by the system. We validate the proposed algorithm by transmitting a 160 Gb/s single-sideband (SSB) signal. The experimental results show that, compared with the upsampling-free KK receiver, the proposed scheme reduces the required CSPR by 1 dB and 0.8 dB at the Nyquist sampling rate for back-to-back (BTB) and 80 km transmission, respectively. Moreover, our proposed method achieves a 2 dB system sensitivity improvement in the BTB scenario. We also discuss the hardware implementation of the improved KK algorithm and its computational complexity.
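To make the approximation idea concrete, a minimal sketch of the two ingredients named above, a few Newton (tangent-method) iterations in place of sqrt and a truncated Taylor series in place of log, is shown below; the initial guess, iteration count, and series order are illustrative assumptions and not the parameters of the proposed receiver DSP.

```python
import numpy as np

def sqrt_newton(x, n_iter=2):
    """Approximate sqrt(x) with a few Newton (tangent-method) iterations.

    Illustrative only: the initial guess and iteration count are assumptions,
    not the values used in the paper.
    """
    y = np.ones_like(x)            # crude initial guess near the carrier level
    for _ in range(n_iter):
        y = 0.5 * (y + x / y)      # Newton update for y^2 = x
    return y

def log_taylor(x, order=3):
    """Approximate ln(x) by a truncated Taylor series around x = 1.

    Valid when the normalized photocurrent stays close to 1, which a
    sufficiently high CSPR is assumed to guarantee.
    """
    u = x - 1.0
    approx = np.zeros_like(x)
    for k in range(1, order + 1):
        approx += ((-1) ** (k + 1)) * u ** k / k
    return approx

# Toy usage on a normalized intensity waveform (hypothetical PD samples)
I = 1.0 + 0.1 * np.random.randn(16)
print(np.max(np.abs(sqrt_newton(I) - np.sqrt(I))))
print(np.max(np.abs(log_taylor(I) - np.log(I))))
```

Low-order approximations like these are accurate only when the normalized photocurrent stays close to the expansion point, which is why a sufficient CSPR is assumed in the sketch.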
The analysis of the dynamic behavior of cells in time-lapse microscopy sequences requires the development of reliable and automatic tracking methods capable of estimating individual cell states and delineating the lineage trees corresponding to the tracks. In this paper, we propose a novel approach, an ant-colony-inspired multi-Bernoulli filter, to handle the tracking of a collection of cells within which mitosis, morphological change and erratic dynamics occur. The proposed technique treats each ant colony as an independent member of an ant society, and the existence probability of each ant colony and the approximation of its density distribution are derived from its individual pheromone field and the corresponding heuristic information, which together approximate the multi-Bernoulli parameters. To effectively guide ant foraging between consecutive frames, a dual prediction mechanism is proposed for the ant colony and its pheromone field. The algorithm performance is tested on challenging datasets with varying population density, frequent cell mitosis and uneven motion over time, demonstrating that the algorithm outperforms recently reported approaches.
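The ant-colony construction itself is specific to the paper, but the multi-Bernoulli backbone it approximates can be sketched generically: each track is a Bernoulli component with an existence probability and a density, and prediction scales the existence probability by a survival probability while pushing the density through a motion model. The sketch below is such a generic prediction step under assumed survival and random-walk parameters, not the paper's pheromone-based derivation.

```python
import numpy as np

def mb_predict(components, p_survival=0.95, motion_std=2.0, rng=None):
    """Generic multi-Bernoulli prediction step: each track is a Bernoulli
    component (existence probability r, particle cloud for its density).

    p_survival and the random-walk motion model are illustrative assumptions,
    not the paper's ant-colony mechanism.
    """
    rng = rng or np.random.default_rng(0)
    predicted = []
    for r, particles in components:
        r_pred = p_survival * r                           # survival-scaled existence
        particles_pred = particles + rng.normal(0.0, motion_std, particles.shape)
        predicted.append((r_pred, particles_pred))
    return predicted

# Toy usage: two hypothesized cells, each tracked by 100 (x, y) particles
comps = [(0.9, np.random.randn(100, 2) + 10),
         (0.6, np.random.randn(100, 2) + 40)]
print([round(r, 3) for r, _ in mb_predict(comps)])
```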
We study bottleneck labeled optimization problems arising in the context of graph theory. This long-established model partitions the set of edges into classes, each of which is identified by a unique color. The generic objective is to construct a subgraph of prescribed structure (such as an s-t path, a spanning tree, or a perfect matching) while trying to minimize the maximum (or, alternatively, maximize the minimum) number of edges picked from any given color.
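The objective is easy to state in code: for a candidate subgraph, count the edges of each color and take the maximum. A brute-force minimizer over simple s-t paths, feasible only for tiny instances, is sketched below; the adjacency-list and edge-coloring representations are assumptions made for illustration.

```python
from collections import Counter

def bottleneck_value(edges, color_of):
    """Maximum number of edges picked from any single color (the objective)."""
    counts = Counter(color_of[e] for e in edges)
    return max(counts.values()) if counts else 0

def min_bottleneck_st_path(adj, color_of, s, t):
    """Brute-force search over simple s-t paths; only sensible for tiny graphs.

    `adj` maps a node to its neighbours; `color_of` maps a frozenset edge
    {u, v} to its color. Both names are illustrative, not from the paper.
    """
    best = None
    def dfs(node, visited, path_edges):
        nonlocal best
        if node == t:
            val = bottleneck_value(path_edges, color_of)
            best = val if best is None else min(best, val)
            return
        for nxt in adj[node]:
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, path_edges + [frozenset((node, nxt))])
    dfs(s, {s}, [])
    return best

# Toy instance: two routes from 'a' to 'd'; one reuses a color, one does not
adj = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
color_of = {frozenset(('a', 'b')): 'red', frozenset(('b', 'd')): 'red',
            frozenset(('a', 'c')): 'red', frozenset(('c', 'd')): 'blue'}
print(min_bottleneck_st_path(adj, color_of, 'a', 'd'))  # 1: route a-c-d uses each color once
```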
The linear least trimmed squares (LTS) estimator is a statistical technique for fitting a linear model to a set of points. Given a set of n points in ℝ^d and given an integer trimming parameter h ≤ n, LTS involves computing the (d-1)-dimensional hyperplane that minimizes the sum of the smallest h squared residuals. LTS is a robust estimator with a 50%-breakdown point, which means that the estimator is insensitive to corruption due to outliers, provided that the outliers constitute less than 50% of the set. LTS is closely related to the well-known LMS estimator, in which the objective is to minimize the median squared residual, and to LTA, in which the objective is to minimize the sum of the smallest 50% of absolute residuals. LTS has the advantage of being statistically more efficient than LMS. Unfortunately, the computational complexity of LTS is less well understood than that of LMS. In this paper we present new algorithms, both exact and approximate, for computing the LTS estimator. We also present hardness results for exact and approximate LTS. A number of our results apply to the LTA estimator as well.
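The LTS objective itself translates directly into code, and a simple randomized elemental-subset heuristic (fit a hyperplane through d random points, score it by the trimmed sum) gives a baseline to compare against; this baseline is for illustration only and is not one of the exact or approximate algorithms developed in the paper.

```python
import numpy as np

def lts_objective(X, y, beta, h):
    """Sum of the h smallest squared residuals for the fit y ~ X @ beta."""
    r2 = (y - X @ beta) ** 2
    return np.sort(r2)[:h].sum()

def random_subset_lts(X, y, h, n_trials=200, rng=None):
    """Heuristic LTS search by exact fits on random elemental subsets of size d.

    Simple baseline for illustration, not one of the algorithms analyzed
    in the paper.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    best_beta, best_val = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=d, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        val = lts_objective(X, y, beta, h)
        if val < best_val:
            best_beta, best_val = beta, val
    return best_beta, best_val

# Toy usage: 100 inliers plus 30 gross outliers, trim to h = 60
rng = np.random.default_rng(1)
X = np.c_[np.ones(130), rng.normal(size=130)]
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 130)
y[100:] += 50                                    # corrupt 30 points
print(random_subset_lts(X, y, h=60)[0])          # roughly recovers [1, 2]
```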
Modern applications of FinTech are challenged by enormous volumes of financial data. One way to handle these is to adopt a streaming setting where data are only available to the algorithms for a very short time. When a new data point (financial transaction) is generated, it needs to be processed directly and forgotten immediately after. In particular, ongoing globalization efforts in FinTech require modern methods of fault detection to work efficiently through more than 10 000 financial transactions per second if they are to be deployed as a first line of defence. This article investigates two algorithms able to perform well in this demanding setting: K-means and FADO. In particular, this article provides support for the claim that "the use of multiple clusters does not necessarily translate into increased detection performance." To support this claim, results are reported for a quasi-realistic case study of Anti-Money Laundering (AML) detection in real-time payment systems. We focus on two prototypical algorithms: the passive-aggressive FADO, which assumes a single cluster, and the well-known K-means algorithm, which works with K > 1 clusters. We find, in this case, that the use of K-means with multiple clusters is unfavorable because 1) both the tuning of K and the additional complexity of the K-means algorithm challenge the computational constraints; 2) K-means necessarily introduces added variability (unreliability) in the results; 3) it requires dimensionality reduction, compromising the interpretability of the detections; and 4) the prevalence of singleton clusters adds unreliability to the outcome. This makes FADO favorable over K-means (with K > 1) in the presented case.
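The one-pass constraint described above is what a streaming clustering method has to respect: each transaction updates the model once and is then discarded. A minimal sequential K-means update of this kind is sketched below; it is a generic illustration and not necessarily the exact K-means variant benchmarked in the article.

```python
import numpy as np

class OnlineKMeans:
    """Minimal sequential K-means: each point updates one centroid and is
    then discarded, matching the one-pass streaming constraint.

    Illustrative sketch only; the article's K-means variant and the FADO
    algorithm are not reproduced here.
    """
    def __init__(self, k, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centroids = rng.normal(size=(k, dim))
        self.counts = np.zeros(k)

    def partial_fit(self, x):
        j = int(np.argmin(np.linalg.norm(self.centroids - x, axis=1)))
        self.counts[j] += 1
        self.centroids[j] += (x - self.centroids[j]) / self.counts[j]  # running mean
        return j  # cluster assignment, usable as a hook for anomaly scoring

# Toy stream of 2-D "transaction features"
model = OnlineKMeans(k=3, dim=2)
for x in np.random.default_rng(2).normal(size=(10_000, 2)):
    model.partial_fit(x)
print(model.counts)
```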
In this paper we study the deployment of an Unmanned Aerial Vehicle (UAV) network that consists of multiple UAVs to provide emergency communication service for people who are trapped in a disaster area, where each UAV is equipped with a base station that has limited computing capacity and power supply, and thus can only serve a limited number of people. Unlike most existing studies that focused on homogeneous UAVs, we consider the deployment of heterogeneous UAVs where different UAVs have different computing capacities. We study the problem of deploying K heterogeneous UAVs in the air to form a temporarily connected UAV network such that the network throughput, i.e., the number of users served by the UAVs, is maximized, subject to the constraint that the number of people served by each UAV is no greater than its service capacity. We then propose a novel O(√s/K)-approximation algorithm for the problem, where s is a given positive integer with 1 <= s <= K, e.g., s = 3. We also devise an improved heuristic based on the approximation algorithm. We finally evaluate the performance of the proposed algorithms. Experimental results show that the numbers of users served by UAVs in the solutions delivered by the proposed algorithms are 25% greater than those of state-of-the-art algorithms.
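The paper's approximation algorithm is not reproduced here, but a plain greedy baseline conveys the flavor of the capacitated coverage problem: repeatedly place the next UAV at the candidate position where it can newly serve the most users, up to its capacity. The candidate grid, coverage radius and capacities below are illustrative assumptions, and the connectivity requirement of the UAV network is ignored.

```python
import numpy as np

def greedy_uav_placement(users, candidates, capacities, radius=100.0):
    """Greedy baseline: for each UAV (in decreasing capacity order), pick the
    candidate position covering the most still-unserved users, up to its
    capacity. Ignores the connectivity constraint; illustration only.
    """
    unserved = set(range(len(users)))
    placement = []
    for cap in sorted(capacities, reverse=True):
        best_pos, best_cover = None, set()
        for p in candidates:
            in_range = {i for i in unserved
                        if np.linalg.norm(users[i] - p) <= radius}
            cover = set(list(in_range)[:cap])             # respect the capacity
            if len(cover) > len(best_cover):
                best_pos, best_cover = p, cover
        if best_pos is not None:
            placement.append((best_pos, len(best_cover)))
            unserved -= best_cover
    return placement, len(users) - len(unserved)          # positions, total served

# Toy usage: 500 users in a 1 km x 1 km area, 3 UAVs of different capacities
rng = np.random.default_rng(3)
users = rng.uniform(0, 1000, size=(500, 2))
candidates = [np.array([x, y]) for x in range(100, 1000, 200)
              for y in range(100, 1000, 200)]
print(greedy_uav_placement(users, candidates, capacities=[80, 50, 30])[1])
```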
In this paper, we provide the first provable linear-time (in terms of the number of nonzero entries of the input) algorithm for approximately solving the generalized trust region subproblem (GTRS) of minimizing a quadratic function over a quadratic constraint under some regularity condition. Our algorithm is motivated by and extends a recent linear-time algorithm for the trust region subproblem by Hazan and Koren [Math. Program., 158 (2016), pp. 363-381]. However, due to the nonconvexity and noncompactness of the feasible region, such an extension is nontrivial. Our main contribution is to demonstrate that under some regularity condition, the optimal solution is in a compact and convex set and lower and upper bounds of the optimal value can be computed in linear time. Using these properties, we develop a linear-time algorithm for the GTRS.
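For reference, a classical (dense, cubic-time) way to solve a GTRS instance under the regularity assumption that A + λB stays positive definite on the search bracket is to bisect on the KKT multiplier until the quadratic constraint becomes active; the sketch below does exactly that and is emphatically not the linear-time algorithm of the paper.

```python
import numpy as np

def gtrs_bisection(A, a, B, b, c, lam_hi=1e6, tol=1e-9):
    """Solve min x'Ax + 2a'x  s.t.  x'Bx + 2b'x + c <= 0 by bisection on the
    KKT multiplier lambda, assuming A + lam*B is positive definite on the
    bracket and the constraint is active at the optimum.

    Classical dense sketch, not the linear-time algorithm of the paper;
    the bracket [0, lam_hi] is an illustrative assumption.
    """
    def x_of(lam):
        return np.linalg.solve(A + lam * B, -(a + lam * b))

    def g(lam):                        # constraint value at the KKT point
        x = x_of(lam)
        return x @ B @ x + 2 * b @ x + c

    if g(0.0) <= 0:                    # unconstrained minimizer already feasible
        return x_of(0.0)
    lo, hi = 0.0, lam_hi
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:                 # still infeasible: increase the multiplier
            lo = mid
        else:
            hi = mid
    return x_of(hi)

# Toy instance: minimize 2*x1^2 + x2^2 - 6*x1 subject to ||x||^2 <= 1
A = np.array([[2.0, 0.0], [0.0, 1.0]])
a = np.array([-3.0, 0.0])
B = np.eye(2)
b = np.zeros(2)
x = gtrs_bisection(A, a, B, b, c=-1.0)
print(x, x @ x)                        # approximately on the unit circle
```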
In this study, macroscopic properties of the vector approximate message passing (VAMP) algorithm for inference of generalized linear models are investigated using a non-rigorous heuristic method of statistical mechanics when the true posterior cannot be used and the measurement matrix is a sample from rotation-invariant random matrix ensembles. The focus is on the correspondence between the non-rigorous replica analysis of statistical mechanics and the performance assessment of VAMP in the model-mismatched setting. A correspondence of this kind is well known when the measurement matrix has independent and identically distributed entries. However, when the measurement matrix follows a general rotation-invariant matrix ensemble, the correspondence has been validated only in limited cases, such as Bayes-optimal inference or convex empirical risk minimization. The results presented in this paper extend the scope of this correspondence. Herein, we heuristically derive the explicit form of the state-evolution equations, which macroscopically describe the VAMP dynamics in the present model-mismatched case, and show that their fixed point is generally consistent with the replica-symmetric solution obtained by the replica method of statistical mechanics. We also show that the fixed point of VAMP can exhibit a microscopic instability, which indicates that the message variables keep moving under VAMP while their macroscopically summarized quantities converge to fixed values. The critical condition for this microscopic instability agrees with that for replica symmetry breaking derived within the non-rigorous replica analysis. Numerical experiments cross-check our findings.
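A compact VAMP sketch for a standard linear model makes explicit the two-stage structure whose macroscopic behavior the state-evolution equations track: a separable denoising stage followed by an LMMSE stage, coupled through extrinsic means and precisions. The denoiser below is a soft threshold whose level scales with the pseudo-noise, a generic choice that need not match the true prior (i.e., a model-mismatched setting); all parameter values, the initialization, and the precision clipping are illustrative assumptions, and no damping is used.

```python
import numpy as np

def vamp_sketch(A, y, theta=1.5, gamma_w=400.0, n_iter=30, eps=1e-8):
    """VAMP for y = A x + w with a soft-threshold denoiser (a generic,
    possibly mismatched choice). Sketch only; not the analysis of the paper.
    """
    M, N = A.shape
    AtA, Aty, Id = A.T @ A, A.T @ y, np.eye(N)
    r1, gamma1 = Aty.copy(), 1.0
    for _ in range(n_iter):
        # Denoising stage: separable soft threshold and its average divergence
        thresh = theta / np.sqrt(gamma1)
        x1 = np.sign(r1) * np.maximum(np.abs(r1) - thresh, 0.0)
        alpha1 = np.clip(np.mean(np.abs(r1) > thresh), eps, 1 - eps)
        eta1 = gamma1 / alpha1
        gamma2 = max(eta1 - gamma1, eps)
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2
        # LMMSE stage for the linear channel
        C = np.linalg.inv(gamma_w * AtA + gamma2 * Id)
        x2 = C @ (gamma_w * Aty + gamma2 * r2)
        alpha2 = np.clip(gamma2 * np.trace(C) / N, eps, 1 - eps)
        eta2 = gamma2 / alpha2
        gamma1 = max(eta2 - gamma2, eps)
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1
    return x1

# Toy usage: sparse signal, Gaussian measurement matrix
rng = np.random.default_rng(4)
N, M = 200, 120
x0 = np.where(rng.random(N) < 0.1, rng.normal(size=N), 0.0)
A = rng.normal(size=(M, N)) / np.sqrt(M)
y = A @ x0 + 0.05 * rng.normal(size=M)
x_hat = vamp_sketch(A, y)
print(np.linalg.norm(x_hat - x0) / max(np.linalg.norm(x0), 1e-12))
```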
Given a source node s and a target node t, the hitting probability tells us how likely an alpha-terminating random walk (which stops with probability α at each step) starting from s can hit t before it stops. This concept originates from the hitting time, a classic concept in random walks. In this paper, we focus on the group hitting probability (GHP) where the target is a set of nodes, measuring the node-to-group structural proximity. For this group version of the hitting probability, we present efficient algorithms for two types of GHP queries: the pairwise query which returns the GHP value of a target set T with respect to (w.r.t.) a source node s, and the top-k query which returns the top-k target sets with the largest GHP value w.r.t. a source node s. We first develop an efficient algorithm named SAMBA for the pairwise query, which is built on a group local push algorithm tailored for GHP, with rigorous analysis for correctness. Next, we show how to speed up SAMBA by combining the group local push algorithm with the Monte Carlo approach, where GHP brings new challenges as it might need to consider every hop of the random walk. We tackle this issue with a new formulation of the GHP and show how to provide approximation guarantees with a detailed theoretical analysis. With SAMBA as the backbone, we develop an iterative algorithm for top-k queries, which adaptively refines the bounds for the candidate target sets, and terminates as soon as it meets the stopping condition, thus saving unnecessary computational costs. We further present an optimization technique to accelerate the top-k query, improving its practical performance. Extensive experiments show that our solutions are orders of magnitude faster than their competitors.
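A naive Monte Carlo estimator makes the GHP definition concrete: simulate α-terminating walks from s and count the fraction that reach any node of the target set before terminating. This is the simple baseline that local-push and hybrid estimators such as SAMBA improve upon; the adjacency-list representation below is an assumption for illustration.

```python
import random

def ghp_monte_carlo(adj, s, targets, alpha=0.2, n_walks=100_000, seed=0):
    """Estimate the group hitting probability of `targets` w.r.t. source s by
    simulating alpha-terminating random walks.

    Naive baseline for illustration; SAMBA's local-push and hybrid estimators
    are not reproduced here.
    """
    rng = random.Random(seed)
    targets = set(targets)
    hits = 0
    for _ in range(n_walks):
        node = s
        while True:
            if node in targets:
                hits += 1
                break
            if rng.random() < alpha or not adj[node]:   # walk terminates
                break
            node = rng.choice(adj[node])
    return hits / n_walks

# Toy usage on a 4-node path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(ghp_monte_carlo(adj, s=0, targets={2, 3}))
```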
We present an improved semidefinite programming based approximation algorithm for the MAX CUT problem in graphs of maximum degree at most 3. The approximation ratio of the new algorithm is at least 0.9326. This improves, and also somewhat simplifies, a result of Feige, Karpinski and Langberg. We also observe that results of Hopkins and Staton and of Bondy and Locke yield a simple combinatorial 4/5-approximation algorithm for the problem. Finally, we present a combinatorial 22/27-approximation algorithm for the MAX CUT problem for regular cubic graphs.
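For contrast with the SDP-based and combinatorial algorithms discussed above, the classical local-search 1/2-approximation for MAX CUT (move any vertex whose switch increases the cut until no such move exists) can be sketched in a few lines; it is a baseline only and not one of the algorithms of the paper.

```python
def local_search_max_cut(adj):
    """Classical local-search 1/2-approximation for MAX CUT: repeatedly move
    any vertex whose switch increases the cut until no such vertex exists.

    Baseline for contrast only; the paper's SDP-based and combinatorial
    algorithms are not implemented here.
    """
    side = {v: 0 for v in adj}                       # start with everything on one side
    improved = True
    while improved:
        improved = False
        for v in adj:
            same = sum(1 for u in adj[v] if side[u] == side[v])
            cut = len(adj[v]) - same
            if same > cut:                           # flipping v gains (same - cut) edges
                side[v] = 1 - side[v]
                improved = True
    cut_size = sum(1 for v in adj for u in adj[v]
                   if u < v and side[u] != side[v])
    return side, cut_size

# Toy cubic graph: K4 (every vertex has degree 3)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(local_search_max_cut(adj)[1])                  # 4, the optimum for K4
```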