Fusing mutual distance information with fingerprints can substantially improve indoor localization accuracy. Such distance information may be spatial (e.g., measurements among users or from installed beaconing devices) or temporal (e.g., via dead-reckoning). Previous approaches to distance fusion often require deterministic distance measurements, consider fingerprints and distances separately, or are narrowly applicable to a specific sensing technology or scenario. Given that fingerprint and distance measurements are intrinsically random, we propose Maxlifd, an accurate indoor localization framework that fuses fingerprints and distances of arbitrary distributions via joint maximum likelihood. Maxlifd is a generic statistical/probabilistic framework applicable to a wide range of sensors (peer-assisted, INS, iBeacon, etc.) and fingerprints (Wi-Fi, RFID, etc.). It achieves low localization errors through a novel optimization formulation that jointly considers mutual distances and fingerprint signals. Using this generic probabilistic formulation, we further derive a lower bound on localization error for comprehensive performance analysis. We have implemented Maxlifd and conducted extensive simulation and experimental trials in an international airport and on our university campus. Our results show that Maxlifd achieves significantly lower errors than other state-of-the-art schemes (often by more than 30 percent). We experimentally verify that its performance does not depend sensitively on exact knowledge of the underlying distributions beyond a simple Gaussian model.
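The joint-likelihood idea can be sketched with a toy example: under an assumed Gaussian model for both Wi-Fi RSS fingerprints and a single anchor range, the position estimate maximizes the sum of the two log-likelihood terms. Everything below (the path-loss model, access-point layout, grid search, and all parameters) is our own illustrative assumption, not the Maxlifd implementation:

```python
import math

# Toy setup: two access points with known positions and a log-distance
# RSS path-loss model; one anchor giving a noisy range measurement.
APS = [(0.0, 0.0), (10.0, 0.0)]
ANCHOR = (5.0, 8.0)

def rss(ap, p):
    """Expected RSS (dBm) at position p from access point ap."""
    d = max(math.dist(ap, p), 0.1)
    return -40.0 - 20.0 * math.log10(d)

def log_likelihood(p, rss_meas, range_meas, sig_rss=2.0, sig_d=0.5):
    """Joint Gaussian log-likelihood of fingerprints + mutual distance."""
    ll = sum(-((m - rss(ap, p)) ** 2) / (2 * sig_rss ** 2)
             for ap, m in zip(APS, rss_meas))
    ll += -((range_meas - math.dist(ANCHOR, p)) ** 2) / (2 * sig_d ** 2)
    return ll

def locate(rss_meas, range_meas, step=0.25):
    """Grid search for the joint maximum-likelihood position estimate."""
    grid = [(i * step, j * step) for i in range(41) for j in range(41)]
    return max(grid, key=lambda p: log_likelihood(p, rss_meas, range_meas))

true_pos = (4.0, 3.0)
meas = [rss(ap, true_pos) for ap in APS]   # noise-free for clarity
est = locate(meas, math.dist(ANCHOR, true_pos))
```

With noise-free measurements the joint likelihood peaks at the true position; in practice the fingerprint term alone is ambiguous (two APs leave a mirror solution) and the distance term resolves it, which is the intuition behind fusing the two.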
Our objective is to train SVM-based Localized Multiple Kernel Learning with an arbitrary ℓp-norm constraint by alternating optimization between standard SVM solvers operating on the localized combination of base kernels and the associated sample-specific kernel weights. Unfortunately, the latter step is a difficult ℓp-norm-constrained quadratic optimization problem. In this letter, by approximating the ℓp-norm with its Taylor expansion, we reformulate the problem of updating the localized kernel weights as a non-convex quadratically constrained quadratic program, which is then solved via an associated convex semi-definite programming relaxation. Experiments on ten benchmark machine learning datasets demonstrate the advantages of our approach.
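The Taylor step can be illustrated in isolation: a first-order expansion of g(w) = Σᵢ wᵢᵖ (for wᵢ ≥ 0) around a current iterate w₀ replaces the ℓp-norm constraint with a linear one. The function names and the numerical check below are our own; the letter's full method then relaxes the resulting QCQP via semi-definite programming:

```python
# First-order Taylor linearization of g(w) = sum_i w_i**p around w0:
#   g(w) ~ g(w0) + sum_i p * w0_i**(p-1) * (w_i - w0_i)
# For p > 1, g is convex, so the linearization is a global lower bound.

def lp_taylor(w, w0, p):
    g0 = sum(x ** p for x in w0)
    grad = [p * x ** (p - 1) for x in w0]
    return g0 + sum(g * (x - x0) for g, x, x0 in zip(grad, w, w0))

p = 1.5
w0 = [0.5, 0.5]          # current iterate of the kernel weights
w = [0.6, 0.4]           # candidate updated weights
exact = sum(x ** p for x in w)
approx = lp_taylor(w, w0, p)
```

Near the expansion point the approximation is tight, which is why the alternating scheme re-expands at each iterate.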
In this paper, the design of a distributed broadband beamforming system is studied. In this configuration, we assume that each microphone is equipped with wireless communication capability. Once their mutual distance information is collected, localization techniques can be used to estimate the microphone locations. A broadband beamformer can then be designed such that the error between the actual response and the desired response is minimized. However, due to variations in the estimated microphone locations, a robust design that accounts for these uncertainties must be considered. This problem is formulated as a minimax optimization problem, which is then transformed into a semi-definite programming problem so that interior point algorithms can be applied. We illustrate the proposed method with several designs and show that the algorithm is robust and efficient.
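A minimal delay-and-sum sketch (our own toy example, not the paper's minimax/SDP design) shows why position uncertainty matters: a few millimetres of microphone-position error already degrades the steered response, which motivates the robust formulation. The array geometry and numbers below are illustrative assumptions:

```python
import cmath
import math

C = 343.0                       # speed of sound, m/s
MICS = [0.0, 0.05, 0.10, 0.15]  # nominal microphone x-positions (m)

def response(positions, theta, f, steer_theta=0.0):
    """|array response| at angle theta for weights steered to steer_theta.

    Weights are computed from the *nominal* positions (MICS); `positions`
    are the true microphone locations, possibly perturbed.
    """
    k = 2 * math.pi * f / C
    w = [cmath.exp(-1j * k * x * math.sin(steer_theta)) / len(MICS)
         for x in MICS]
    return abs(sum(wi * cmath.exp(1j * k * x * math.sin(theta))
                   for wi, x in zip(w, positions)))

theta = math.radians(30.0)
nominal = response(MICS, theta, 2000.0, steer_theta=theta)
perturbed = response([x + 0.005 * (-1) ** i for i, x in enumerate(MICS)],
                     theta, 2000.0, steer_theta=theta)   # 5 mm errors
```

With exact positions the steered response is unity; with 5 mm alternating position errors it drops below unity, and the loss grows with frequency, which is the uncertainty the minimax design guards against.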
We propose a behavioural portfolio selection model called collective mental accounting (CMA), which integrates all mental sub-portfolios (mental accounts) in one mathematical model. Moreover, this study contributes to the literature on behavioural portfolio selection in three further ways: first, the CMA model can determine the proportions of wealth allocated to each mental sub-portfolio with or without input from the investor. Second, unlike other mental accounting (MA) models, CMA makes it possible to impose constraints on total asset holdings, such as short-selling and cardinality constraints. Third, to make CMA more tractable and mathematically elegant, we obtain a semi-definite programming representation of the model. We also present a numerical example to investigate the effects of short-selling constraints and to compare the portfolio recommendations, utility functions, feasibility, and optimality of the CMA and MA models. The results reveal that although both models' solutions are mean-variance efficient, CMA outperforms MA in terms of the behavioural efficient frontier and utility functions.
Constant-modulus signals such as m-sequences are known to have good autocorrelation properties, as well as a good peak-to-average power ratio that allows full utilization of the transmitter's power. However, the broadband ambiguity surface for such signals exhibits high sidelobe levels, which are undesirable in applications where the signal is subject to broadband Doppler. We formulate an optimization problem to minimize the maximum sidelobe level of such signals over a set of delay-Doppler values. This problem is non-convex and difficult to solve. We explore a convex regularization of the problem that can readily be solved using semi-definite programming, and show that optimal or near-optimal signals can be designed using this method. We further explore some heuristic methods to reduce the computational and memory complexity of the solution, enabling us to design longer signals. We demonstrate the advantage of our signal design over conventional unimodular signals for target detection in strong clutter in a continuous active sonar application. (C) 2019 Elsevier B.V. All rights reserved.
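The baseline autocorrelation property the abstract starts from is easy to verify numerically. The length-7 m-sequence below and its flat -1 periodic-autocorrelation sidelobes are standard textbook facts, not the paper's optimized designs (which target the harder delay-Doppler sidelobes):

```python
# Length-7 m-sequence 1110100 mapped to +/-1. For a length-N m-sequence the
# periodic autocorrelation is N at zero lag and -1 at every nonzero lag.
seq = [1, 1, 1, -1, 1, -1, -1]
N = len(seq)

def periodic_autocorr(s, lag):
    """Periodic (circular) autocorrelation of s at the given lag."""
    return sum(s[i] * s[(i + lag) % len(s)] for i in range(len(s)))

peak = periodic_autocorr(seq, 0)                      # = N
sidelobes = [periodic_autocorr(seq, k) for k in range(1, N)]
```

This two-valued autocorrelation is what breaks down once broadband Doppler stretches the waveform, which is the regime the paper's SDP-based design addresses.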
This paper introduces a simple data-driven quadratic stabilization control (DDQSC) method to design a state feedback controller based solely on experimental measurements while avoiding explicitly identifying the plant. Rather, we seek a controller guaranteed to quadratically stabilize all plants that could have possibly generated the observed data. While in principle this leads to a very challenging non-convex robust optimization problem, our main result provides a convex, albeit infinite-dimensional, necessary and sufficient condition for the existence of such a controller and its associated Lyapunov function. In the second part of the paper, we provide a tractable finite-dimensional convex relaxation of this condition and illustrate its effectiveness with several examples.
The accuracy and complexity of kernel learning algorithms are determined by the set of kernels over which they can optimize. An ideal set of kernels should: admit a linear parameterization (tractability); be dense in the set of all kernels (accuracy); and contain only universal members, so that the hypothesis space is infinite-dimensional (scalability). Currently, no class of kernels meets all three criteria - e.g., Gaussians are not tractable or accurate, and polynomials are not scalable. We propose a new class that meets all three criteria - the Tessellated Kernel (TK) class. Specifically, the TK class: admits a linear parameterization using positive matrices; is dense in the set of all kernels; and every element in the class is universal. This implies that using TK kernels to learn the kernel can obviate the need to select candidate kernels in algorithms such as SimpleMKL, and parameters such as the bandwidth. Numerical testing on soft-margin Support Vector Machine (SVM) problems shows that algorithms using TK kernels outperform other kernel learning algorithms and neural networks. Furthermore, our results show that when the ratio of the number of training data to features is high, the improvement of TK over MKL increases significantly.
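One standard ingredient behind linearly parameterized kernel classes is that any nonnegative combination of positive-semidefinite base kernels is again a valid kernel. A small self-contained spot-check (our own illustration with generic base kernels, not the TK parameterization itself):

```python
import math
import random

# Combine a Gaussian and a polynomial kernel on a few 1-D points with
# nonnegative weights, then spot-check positive semidefiniteness of the
# combined Gram matrix via random quadratic forms v^T K v >= 0.
X = [0.0, 0.5, 1.3, 2.0]
n = len(X)

def gauss(a, b, bw=1.0):
    return math.exp(-((a - b) ** 2) / (2 * bw ** 2))

def poly(a, b, deg=2):
    return (a * b + 1.0) ** deg

def gram(k):
    return [[k(a, b) for b in X] for a in X]

def combine(grams, weights):
    return [[sum(w * g[i][j] for w, g in zip(weights, grams))
             for j in range(n)] for i in range(n)]

K = combine([gram(gauss), gram(poly)], [0.7, 0.3])
random.seed(0)
ok = all(
    sum(v[i] * K[i][j] * v[j] for i in range(n) for j in range(n)) >= -1e-9
    for v in ([random.uniform(-1, 1) for _ in range(n)] for _ in range(100))
)
```

Learning the weights of such a combination is the core MKL problem; the TK class replaces the scalar weights with a positive matrix parameterization while keeping the optimization linear in the parameters.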
ISBN:
(Print) 9781467310680
Semi-definite programming (SDP) has been widely used for geolocation based on time-delay data. It offers lower computational costs at the expense of slight decreases in accuracy. In this work, we consider the case of Doppler data, which is overlooked in existing works, since convex relaxation for Doppler data seems less obvious than for time-delay data. We fill this gap and provide SDP solutions for Doppler-based geolocation. We also show that geolocation based on both Doppler and time-delay data requires the same relaxation as geolocation based on time-delay data only or on Doppler data only.
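The nonconvex fit that such an SDP relaxes can be illustrated with a brute-force toy: given Doppler shifts observed at sensors with known positions and velocities, grid-search the emitter position that minimizes the squared Doppler residuals. All geometry and numbers below are illustrative assumptions of ours, not the paper's formulation:

```python
import math

C = 3e8      # propagation speed, m/s
F0 = 1e9     # carrier frequency, Hz

# Three moving sensors: (position, velocity), in metres and m/s.
SENSORS = [((0.0, 0.0), (100.0, 0.0)),
           ((5000.0, 0.0), (0.0, 120.0)),
           ((0.0, 5000.0), (80.0, -60.0))]

def doppler(p, sensor):
    """Doppler shift at a sensor for a stationary emitter at p."""
    (sx, sy), (vx, vy) = sensor
    dx, dy = p[0] - sx, p[1] - sy
    r = math.hypot(dx, dy)
    v_r = (vx * dx + vy * dy) / r      # closing speed toward the emitter
    return F0 * v_r / C

def locate(shifts, step=100.0):
    """Brute-force nonlinear least-squares fit over a coarse grid."""
    grid = [(i * step, j * step) for i in range(1, 50) for j in range(1, 50)]
    return min(grid, key=lambda p: sum((doppler(p, s) - m) ** 2
                                       for s, m in zip(SENSORS, shifts)))

true_p = (3000.0, 2000.0)
est = locate([doppler(true_p, s) for s in SENSORS])
```

The residual function is nonconvex in the emitter position; the paper's contribution is a convex SDP relaxation that avoids this kind of exhaustive search.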
This paper proposes a method for estimating the innovations model in a closed-loop environment by using an estimate of the innovations process. The estimate of the innovations process from a finite interval of data has a bias, and so does the estimate produced by the proposed method. However, our analysis shows that this bias can be reduced. The Kalman gain and the covariance of the innovations process are estimated by solving a semi-definite programming problem previously proposed by the authors. Numerical simulation illustrates that the proposed method performs better than Closed-Loop MOESP and PBSID when the data length is large and the past horizon is chosen to be small.
ISBN:
(Print) 9783030324308; 9783030324292
In this paper, we address the problem of obtaining optimal deceptive signaling strategies between two agents, a sender and a receiver, over an ideal channel. Unlike classical (cooperative) communication settings, here the agents select their strategies under two different cost measures. For the case when these costs are quadratic, we analyze the Stackelberg equilibrium, where the sender leads the game by committing to his/her strategies beforehand. This is an infinite-dimensional optimization problem, in which the sender must anticipate the receiver's reaction while selecting his/her policy within the general class of stochastic kernels. The specific model we adopt for the underlying information of interest is a discrete-time Markov process generated by a vector-valued linear dynamical system; at each instant, the information is a realization of a square-integrable multivariate random vector. Over both finite and infinite horizons, we show the optimality of memoryless, "linear" signaling rules when the receiver uses a Kalman filter to estimate its information of interest. We develop algorithms that deliver the optimal signaling strategies. Numerical analysis shows that the performance of the sender degrades only slightly when the receiver uses the best nonlinear estimator, even when the information of interest is a Rademacher random variable rather than Gaussian.
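The receiver-side estimator assumed in this setting is a standard Kalman filter. A minimal scalar version (the dynamics, noise levels, and seed are our own illustrative choices, not the paper's model) shows the predict/update recursion the analysis builds on:

```python
import random

# Scalar Kalman filter for x_{k+1} = a*x_k + w_k observed as y_k = x_k + v_k,
# with process variance q and measurement variance r.
random.seed(1)
a, q, r = 0.9, 0.1, 0.5

def kalman(ys, x0=0.0, p0=1.0):
    """Return the filtered state estimates for the measurement sequence ys."""
    x, p, out = x0, p0, []
    for y in ys:
        x, p = a * x, a * a * p + q        # predict
        k = p / (p + r)                    # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p  # update
        out.append(x)
    return out

# Simulate a trajectory and compare filtered error vs. raw-measurement error.
xs, ys, x = [], [], 0.0
for _ in range(500):
    x = a * x + random.gauss(0.0, q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0.0, r ** 0.5))

est = kalman(ys)
mse_filter = sum((e, t) == (e, t) and (e - t) ** 2 for e, t in zip(est, xs)) / len(xs)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)
```

The filter's mean-squared error sits well below the raw measurement variance; the paper's question is how a strategic sender should shape the signals feeding such a filter.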