The optimal value of the objective function of the d-dimensional earth mover's problem can be viewed as a real-valued functional $\varphi$ defined on normalized, finite multisets $X \subset \mathbb{R}^n_+$. We show that $\varphi$ possesses a variety of useful properties: it is homogeneous and translation invariant, it has a monotonicity property of the form $\operatorname{conv}(X) \subseteq \operatorname{conv}(Y) \Rightarrow \varphi(X) \le \varphi(Y)$, and it is Minkowski additive, i.e., $\varphi(X + Y) = \varphi(X) + \varphi(Y)$. We also show that for admissible $X$, solutions to the primal and dual linear programs that define $\varphi$ may be generated simultaneously by a single-phase greedy algorithm, and that the dual solutions reflect the geometry of $X$.
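As a concrete illustration of the functional in question (not the paper's greedy construction), the sketch below computes the earth mover's objective between two normalized finite weighted point sets by solving the transportation linear program with SciPy; the Euclidean ground cost and the toy point sets are assumptions for the example.

```python
# A minimal sketch: earth mover's objective between two normalized,
# finite weighted point sets via the transportation linear program.
import numpy as np
from scipy.optimize import linprog

def emd(X, Y, wx, wy):
    """X: (p, n) source points with weights wx summing to 1;
    Y: (q, n) sink points with weights wy summing to 1."""
    p, q = len(X), len(Y)
    # Ground cost: Euclidean distance between every source/sink pair.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2).ravel()

    # Equality constraints: row sums equal wx, column sums equal wy.
    A_eq = np.zeros((p + q, p * q))
    for i in range(p):
        A_eq[i, i * q:(i + 1) * q] = 1.0          # mass leaving source i
    for j in range(q):
        A_eq[p + j, j::q] = 1.0                   # mass arriving at sink j
    b_eq = np.concatenate([wx, wy])

    res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# Example: two small normalized multisets in the plane.
X = np.array([[0.0, 0.0], [1.0, 0.0]]); wx = np.array([0.5, 0.5])
Y = np.array([[0.0, 1.0], [1.0, 1.0]]); wy = np.array([0.5, 0.5])
print(emd(X, Y, wx, wy))  # ~1.0: all mass moves one unit upward
```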
In this paper, we employ the sparsity-constrained least squares method to reconstruct sparse signals from noisy measurements in the high-dimensional case, and establish the existence of an optimal solution under certain conditions. We propose an inexact sparse-projected gradient method for numerical computation and discuss its convergence. Moreover, we present numerical results that demonstrate the efficiency of the proposed method.
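A sparse-projected gradient iteration of the kind described can be sketched as follows; the step-size rule, sparsity level k, iteration budget, and stopping test are illustrative assumptions rather than the paper's exact inexact scheme.

```python
# A minimal sketch of a sparse-projected gradient iteration for
# min ||Ax - b||^2 subject to ||x||_0 <= k.
import numpy as np

def sparse_projected_gradient(A, b, k, step=None, iters=200):
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: 1 / ||A||_2^2
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of 0.5*||Ax - b||^2
        z = x - step * g                         # gradient step
        # Projection onto the sparsity constraint: keep the k largest entries.
        idx = np.argpartition(np.abs(z), -k)[-k:]
        x_new = np.zeros(n)
        x_new[idx] = z[idx]
        if np.linalg.norm(x_new - x) < 1e-8:
            break
        x = x_new
    return x

# Example: recover a 3-sparse signal from noisy random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
b = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = sparse_projected_gradient(A, b, k)
print(np.linalg.norm(x_hat - x_true))
```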
In this paper, we study wireless power transfer (WPT) using the discrete lens array-based beamspace large-scale multiple-input multiple-output (MIMO) system. The channel matrix of beamspace MIMO is sparse, which enables the transmitter to employ only a small number of active antennas while maintaining the full MIMO performance. Hence, the number of radio frequency (RF) chains can be significantly reduced, cutting the cost of hardware implementation and circuit power consumption. We consider two WPT design problems in the beamspace MIMO system with constraints on the number of RF chains: the sum power transfer problem and the max-min power transfer problem; for each problem, we consider both multi-stream and uni-stream transmission. For the sum power transfer problem, we show that uni-stream power transfer achieves the same performance as the multi-stream case, and we propose two algorithms for the uni-stream transmission, namely, an eigendecomposition-based greedy algorithm and a truncated power iteration algorithm. For the max-min power transfer problem, we propose a semidefinite relaxation-based greedy algorithm for the multi-stream power transfer and a Riemannian conjugate gradient algorithm for the uni-stream case. Simulation results show that with a small number of RF chains, the beamspace MIMO system significantly outperforms the conventional MIMO system in terms of WPT efficiency.
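To make the eigendecomposition-based greedy idea concrete, here is a minimal sketch that adds antennas one at a time, each time choosing the antenna whose inclusion most increases the largest eigenvalue of the selected-channel Gram matrix (i.e., the power harvested by the best single beam). The random sparse channel model and RF-chain budget are assumptions for the example, not the paper's setup.

```python
# A minimal sketch of eigendecomposition-based greedy antenna selection
# for uni-stream sum-power transfer under an RF-chain constraint.
import numpy as np

def greedy_antenna_selection(H, n_rf):
    """H: (n_rx, n_tx) beamspace channel.  Returns selected antenna indices
    and the largest eigenvalue of the final selected-channel Gram matrix."""
    n_tx = H.shape[1]
    selected, best_val = [], 0.0
    for _ in range(n_rf):
        best_ant, best_val = None, -np.inf
        for a in range(n_tx):
            if a in selected:
                continue
            cols = selected + [a]
            G = H[:, cols].conj().T @ H[:, cols]   # Gram matrix of the selection
            val = np.linalg.eigvalsh(G)[-1]        # largest eigenvalue
            if val > best_val:
                best_ant, best_val = a, val
        selected.append(best_ant)
    return selected, best_val

# Example: a sparse beamspace-like channel with a handful of strong beams.
rng = np.random.default_rng(4)
n_rx, n_tx, n_rf = 4, 64, 8
H = rng.normal(size=(n_rx, n_tx)) * (rng.random(n_tx) < 0.15)
sel, power = greedy_antenna_selection(H, n_rf)
print("selected antennas:", sorted(sel), "best-beam power:", round(float(power), 3))
```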
Mobile edge computing (MEC) can provide mobile users (MUs) with highly reliable, low-delay computing and communication services. An imbalanced edge cloud deployment can better adapt to the non-uniform spatial-temporal distribution of tasks and reduce the deployment cost of edge cloud servers. For multi-user, multi-task offloading decisions based on the imbalanced edge cloud, a new offloading cost criterion, based on the trade-off among time delay, energy consumption, and cost, is designed to quantify the user experience of task offloading and to serve as the optimization objective of the offloading decision. Both the problem of minimizing the sum offloading cost over all MUs (efficiency-based) and the problem of minimizing the maximal offloading cost per MU (fairness-based) are discussed. Efficiency-based offloading decision algorithms [a centralized greedy algorithm (CGA) and a modified greedy algorithm (MGA)] and a fairness-based offloading decision algorithm [a fairness-based greedy algorithm (FGA)] are proposed, and the performance bounds of the algorithms are analyzed. Simulation results show that the offloading cost of the MGA is lower than that of the CGA, the resource-utilization efficiency of the CGA is higher than that of the FGA, and the fairness of the FGA is stronger than that of the CGA.
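A minimal sketch of a centralized greedy offloading decision in this spirit is given below: each task is assigned to the option (local execution or one of the edge clouds) that adds the least to a weighted delay-energy-cost score. The weight values, capacity model, and cost terms are illustrative assumptions, not the paper's CGA.

```python
# A minimal sketch of greedy multi-user offloading with a weighted
# delay-energy-cost score (illustrative cost model and weights).
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # required CPU cycles
    data_bits: float     # input data to upload if offloaded

@dataclass
class EdgeCloud:
    cpu_rate: float      # cycles per second
    uplink_rate: float   # bits per second
    price: float         # monetary cost per offloaded task
    capacity: int        # max tasks it can accept

LOCAL_CPU, LOCAL_ENERGY_PER_CYCLE = 1e9, 1e-9     # assumed device parameters
W_DELAY, W_ENERGY, W_COST = 1.0, 0.5, 0.2         # assumed trade-off weights

def offload_cost(task, cloud):
    if cloud is None:                              # local execution
        delay = task.cycles / LOCAL_CPU
        return W_DELAY * delay + W_ENERGY * task.cycles * LOCAL_ENERGY_PER_CYCLE
    delay = task.data_bits / cloud.uplink_rate + task.cycles / cloud.cpu_rate
    return W_DELAY * delay + W_COST * cloud.price

def greedy_offload(tasks, clouds):
    decisions, load = [], {id(c): 0 for c in clouds}
    for t in tasks:
        options = [None] + [c for c in clouds if load[id(c)] < c.capacity]
        best = min(options, key=lambda c: offload_cost(t, c))
        if best is not None:
            load[id(best)] += 1
        decisions.append(best)
    return decisions

tasks = [Task(5e8, 2e6), Task(2e9, 1e6), Task(1e9, 4e6)]
clouds = [EdgeCloud(5e9, 1e7, 0.1, 2), EdgeCloud(1e10, 5e6, 0.3, 1)]
print([("edge" if d else "local") for d in greedy_offload(tasks, clouds)])
```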
We consider the leader selection problem in a network with consensus dynamics where both leader and follower agents are subject to stochastic external disturbances. The performance of the system is quantified by the total steady-state variance of the node states, and the goal is to identify the set of leaders that minimizes this variance. We first show that this performance measure can be expressed as a submodular set function over the nodes in the network. We then use this result to analyze the performance of two greedy, polynomial-time algorithms for leader selection, showing that the leader sets produced by the greedy algorithms are within provable bounds of optimal.
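The greedy selection over a submodular performance measure can be sketched generically as below; as a stand-in measure we use the total follower variance for the noise-free-leader case, trace(L_ff^{-1}) with L_ff the grounded Laplacian, which differs from the paper's noise-corrupted-leader measure but has the same greedy structure.

```python
# A generic greedy sketch for leader selection with a stand-in
# performance measure: total follower variance trace(L_ff^{-1}).
import numpy as np

def total_variance(L, leaders):
    followers = [i for i in range(L.shape[0]) if i not in leaders]
    L_ff = L[np.ix_(followers, followers)]     # grounded Laplacian
    return np.trace(np.linalg.inv(L_ff))

def greedy_leader_selection(L, k):
    leaders = set()
    for _ in range(k):
        # Add the node whose inclusion most reduces the total variance.
        best = min((i for i in range(L.shape[0]) if i not in leaders),
                   key=lambda i: total_variance(L, leaders | {i}))
        leaders.add(best)
    return leaders

# Example: a path graph on 6 nodes.
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
print(greedy_leader_selection(L, 2))
```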
Doppler ultrasonography (DUS) is widely used in medical diagnosis due to its low cost, non-invasive nature, and real-time operation. Its applications have further expanded with the emergence of point-of-care and wearable devices, the demand for which is rapidly increasing. However, current DUS abnormality detection methods are too computationally intensive for such resource-constrained platforms. This brief presents a low-complexity real-time abnormality detection scheme that enables the development of wearable DUS devices. It uses an approximated Fourier transform and a novel greedy algorithm to detect spectrogram envelopes on the fly from the stream of samples, thus significantly reducing power and area requirements while achieving a detection accuracy of 96% on a mixture of 25 normal and abnormal test cases. A real-time ASIC implementation of the scheme in 180-nm CMOS consumes 16.8 µW at a clock frequency of 80 kHz while occupying a layout area of 0.64 mm².
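The greedy envelope-detection idea can be illustrated by a simple per-frame scan: for each incoming magnitude spectrum, start at the highest frequency bin and take the first bin that exceeds a noise-adaptive threshold. This is an illustrative reconstruction only; the brief's approximated Fourier transform and exact threshold rule are not reproduced.

```python
# A minimal sketch of on-the-fly spectrogram envelope detection:
# greedily scan each frame from high frequency downward and take the
# first bin above a noise-adaptive threshold.
import numpy as np

def envelope_from_frames(frames, noise_factor=3.0):
    """frames: iterable of magnitude spectra (one per time step).
    Returns the detected maximum-frequency envelope (bin index per frame)."""
    envelope = []
    for mag in frames:
        threshold = noise_factor * np.median(mag)   # crude noise-floor estimate
        env_bin = 0
        for f in range(len(mag) - 1, -1, -1):       # scan from high to low freq
            if mag[f] > threshold:
                env_bin = f
                break
        envelope.append(env_bin)
    return np.array(envelope)

# Example: a synthetic Doppler-like signal whose frequency sweeps up and down.
fs, n_fft = 8000, 128
t = np.arange(0, 2.0, 1 / fs)
inst_freq = 600 + 400 * np.sin(2 * np.pi * 1.0 * t)          # Hz
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
x = np.cos(phase) + 0.1 * np.random.default_rng(2).normal(size=len(t))
frames = [np.abs(np.fft.rfft(x[i:i + n_fft])) for i in range(0, len(x) - n_fft, n_fft)]
env = envelope_from_frames(frames)
print(env[:10] * fs / n_fft)   # detected envelope frequencies (Hz), first frames
```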
We study a general node discoverability optimization problem on networks, where the goal is to create a few edges to a target node so that the target node can be easily discovered by the other nodes in the network. For instance, a jobseeker may want to connect with certain members on LinkedIn so that recruiters can easily find them. We first propose two definitions of node discoverability. Then, we prove that the node discoverability optimization problem is NP-hard and show that a greedy algorithm can be used to find near-optimal solutions. To scale the algorithm up to large networks, we design three methods: (1) an exact method based on dynamic programming, which is accurate but computationally inefficient; (2) an estimation method based on the random-walk framework, which is efficient but may be inaccurate; and (3) an estimation-and-refinement method, which combines the previous two and is shown to be both accurate and efficient. Experiments conducted on real networks demonstrate that the estimation-and-refinement method provides a good trade-off between solution accuracy and computational efficiency, achieving speedups of up to three orders of magnitude over the exact method.
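A greedy edge-creation loop of the kind analyzed can be sketched as follows; as a stand-in discoverability score it counts how many nodes can reach the target within L hops, rather than the paper's two random-walk-based definitions, and none of the scaling methods are used.

```python
# A greedy sketch for the edge-creation problem: repeatedly add the
# in-edge to the target that most increases a (stand-in) discoverability
# score = number of nodes reaching the target within max_hops hops.
from collections import deque

def reach_count(in_edges, target, max_hops):
    """Reverse BFS over the directed graph given as in-edge lists."""
    seen, frontier = {target}, deque([(target, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == max_hops:
            continue
        for pred in in_edges.get(node, []):
            if pred not in seen:
                seen.add(pred)
                frontier.append((pred, d + 1))
    return len(seen) - 1          # exclude the target itself

def greedy_add_edges(in_edges, target, candidates, k, max_hops=3):
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: reach_count(
                       {**in_edges, target: in_edges.get(target, []) + chosen + [c]},
                       target, max_hops))
        chosen.append(best)
    return chosen

# Toy directed graph as in-edge lists: in_edges[v] = nodes with an edge into v.
in_edges = {1: [2, 3], 2: [4], 3: [5], 4: [6, 7], 5: [8]}
print(greedy_add_edges(in_edges, target=0, candidates=[1, 2, 3, 4, 5], k=2))
```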
In order to improve the imaging quality of ghost imaging and address the problem of high distortion at low sampling rates, compressive sensing ghost imaging based on image gradient (IGGI) is proposed. The image gradient reflects changes in optical characteristics and carries the edge information of the object. In this paper, the principle of compressive sensing ghost imaging is analyzed, and the total variation, i.e., the integral of the image gradient, is used to optimize the reconstruction process. At the same time, a threshold on the matching degree is set to reduce the computational load and improve imaging speed. Simulation and experimental results show that, compared with traditional ghost imaging, IGGI can achieve high-quality images and capture the edge information of targets at a low sampling rate, which further facilitates the practical application of ghost imaging.
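A total-variation-regularized reconstruction in this spirit can be sketched with plain gradient descent on a smoothed TV penalty; the binary speckle measurement model, regularization weight, and solver are illustrative assumptions, not the IGGI algorithm itself.

```python
# A minimal sketch: minimize ||A x - y||^2 + lam * TV_eps(x) by gradient
# descent, where TV_eps is a smoothed isotropic total variation.
import numpy as np

def tv_grad(img, eps=1e-3):
    """Gradient of the smoothed total variation of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])   # forward diff, 0 at last col
    dy = np.diff(img, axis=0, append=img[-1:, :])   # forward diff, 0 at last row
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    grad = -(px + py)
    grad[:, 1:] += px[:, :-1]                       # contribution from left neighbor
    grad[1:, :] += py[:-1, :]                       # contribution from upper neighbor
    return grad

def tv_reconstruct(A, y, shape, lam=0.1, iters=800):
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)    # safe step for the data term
    x = np.zeros(np.prod(shape))
    for _ in range(iters):
        grad_fid = 2 * A.T @ (A @ x - y)
        grad_tv = tv_grad(x.reshape(shape)).ravel()
        x = x - step * (grad_fid + lam * grad_tv)
    return x.reshape(shape)

# Example: a 16x16 object measured with random binary speckle patterns.
rng = np.random.default_rng(3)
shape, m = (16, 16), 120                        # ~47% sampling rate
obj = np.zeros(shape); obj[4:12, 6:10] = 1.0    # simple bright rectangle
A = rng.integers(0, 2, (m, obj.size)).astype(float)
y = A @ obj.ravel() + 0.01 * rng.normal(size=m)
rec = tv_reconstruct(A, y, shape)
print(float(np.abs(rec - obj).mean()))          # mean absolute reconstruction error
```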
Blind gain and phase calibration (BGPC) is a bilinear inverse problem involving the joint determination of the unknown gains and phases of the sensing system and the unknown signal. BGPC arises in numerous applications, e.g., blind albedo estimation in inverse rendering, synthetic aperture radar autofocus, and sensor array auto-calibration. In some cases, sparse structure in the unknown signal alleviates the ill-posedness of BGPC. Recently, there has been renewed interest in solutions to BGPC with careful analysis of error bounds. In this paper, we formulate BGPC as an eigenvalue/eigenvector problem and propose to solve it via power iteration, or, in the sparsity or joint sparsity case, via truncated power iteration. Under certain assumptions, the unknown gains, phases, and the unknown signal can be recovered simultaneously. Numerical experiments show that the power iteration algorithms work not only in the regime predicted by our main results, but also in regimes where theoretical analysis is limited. We also show that our power iteration algorithms for BGPC compare favorably with competing algorithms under adversarial conditions, e.g., with noisy measurements or a poor initial estimate.
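The basic primitive here is power iteration for the leading eigenvector of a Hermitian matrix; a minimal sketch follows (the BGPC-specific matrix construction and the truncation step for the sparse case are not reproduced).

```python
# A minimal sketch of power iteration for the leading eigenpair of a
# Hermitian matrix, the primitive the BGPC formulation builds on.
import numpy as np

def power_iteration(M, iters=300):
    """Approximate the leading eigenpair of a Hermitian matrix M."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=M.shape[0]) + 1j * rng.normal(size=M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)                # normalize each iterate
    lam = np.vdot(v, M @ v).real                 # Rayleigh quotient at convergence
    return lam, v

# Example: recover the dominant eigenvalue of a random Hermitian PSD matrix.
rng = np.random.default_rng(5)
B = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
M = B @ B.conj().T                               # Hermitian positive semidefinite
lam, v = power_iteration(M)
print(abs(lam - np.linalg.eigvalsh(M)[-1]))      # gap to the true largest eigenvalue
```

In the sparse case the abstract refers to a truncated variant, which simply re-projects each iterate onto the sparsity constraint before normalizing.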
Influence maximization aims at detecting the top-k influential users (the seed set) in a network, a problem for which finding an optimal solution has been proved NP-hard. To address this challenge, seeking a trade-off between effectiveness and efficiency is a more realistic approach. How to accurately calculate the influence probability is a fundamental open problem in influence maximization. Existing work mainly adopts pair-wise parameters to denote the influence spread probability. These approaches suffer from severe over-representation and overfitting problems, and thus perform poorly for the influence maximization problem. In this paper, we calculate the influence probability by learning low-dimensional vectors (i.e., an influence vector and a susceptibility vector) from crowdsensing data in the information diffusion network. With far fewer parameters than the pair-wise approach, our method can overcome the overfitting problem and provides a foundation for solving the problem effectively. Moreover, we propose the Diffusion Discount algorithm based on this influence probability calculation and a heuristic pruning approach, which achieves high time efficiency. Experimental results show that our algorithm outperforms five other representative algorithms on real-world datasets and is more practical for large-scale datasets.
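The low-dimensional influence-probability idea can be sketched as follows: each user carries an influence vector and a susceptibility vector, and the activation probability on an edge is a function of their inner product. The sigmoid link and the discount-style seed selection below are illustrative assumptions, not the paper's Diffusion Discount algorithm.

```python
# A minimal sketch: edge activation probabilities from low-dimensional
# influence/susceptibility vectors, plus a discount-style greedy seeding.
import numpy as np

def influence_prob(I_u, S_v):
    """Probability that u activates v, from the embedding inner product."""
    return 1.0 / (1.0 + np.exp(-I_u @ S_v))      # sigmoid link (assumed)

def select_seeds(adj, I, S, k):
    """Pick k seeds greedily by expected one-hop spread, discounting
    neighbors already likely to be reached by chosen seeds."""
    n = len(adj)
    discounted = np.ones(n)                       # 1 = not yet reached by a seed
    seeds = []
    for _ in range(k):
        gains = np.zeros(n)
        for u in range(n):
            if u in seeds:
                gains[u] = -np.inf
                continue
            gains[u] = sum(influence_prob(I[u], S[v]) * discounted[v]
                           for v in adj[u])
        best = int(np.argmax(gains))
        seeds.append(best)
        for v in adj[best]:                       # discount the new seed's neighbors
            discounted[v] *= 1.0 - influence_prob(I[best], S[v])
    return seeds

# Toy example: 6 users, 4-dimensional embeddings, a small follower graph.
rng = np.random.default_rng(6)
I, S = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
adj = {0: [1, 2, 3], 1: [2], 2: [4], 3: [4, 5], 4: [5], 5: []}
print(select_seeds(adj, I, S, k=2))
```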