For practical wide-band direction-of-arrival (DOA) estimation problems, sparse Bayesian algorithms implemented via expectation maximization or variational Bayesian inference generally require prohibitive computational complexity. In this paper, computationally efficient wide-band DOA estimation is considered within a sparse Bayesian framework. In particular, two computationally efficient wide-band DOA estimation methods are proposed: single- and multiple-observation sparse Bayesian multitask learning, denoted SO-SBMTL and MO-SBMTL. The proposed methods process the signals independently and jointly, respectively, to obtain the DOA estimates with significantly reduced computational complexity. In addition, the off-the-grid problem in wide-band DOA estimation is also considered within this framework. A remarkable feature of the proposed methods is that the off-the-grid parameter admits a closed-form solution, so the numerical search procedure is avoided and the computational complexity is substantially reduced. Experimental results demonstrate that the proposed algorithms achieve desirable performance with substantially reduced computational complexity. MO-SBMTL outperforms SO-SBMTL in scenarios with static or slowly time-varying sources, and the off-the-grid algorithm can also efficiently estimate the true DOAs of the sources.
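To illustrate the kind of grid-based sparse Bayesian machinery such methods build on, the following is a minimal single-frequency, single-snapshot sketch using the standard Tipping-style EM updates for sparse Bayesian learning. The array geometry, grid, signal amplitudes, and all names here are hypothetical illustration choices; this is a generic SBL sketch, not the SO-SBMTL/MO-SBMTL algorithms themselves.

```python
import numpy as np

def steering_matrix(m, grid_deg, spacing=0.5):
    # ULA steering vectors at half-wavelength spacing (a narrow-band
    # stand-in for one frequency bin of a wide-band problem)
    k = np.arange(m)[:, None]
    return np.exp(-2j * np.pi * spacing * k
                  * np.sin(np.deg2rad(grid_deg))[None, :])

def sbl_em(y, A, sigma2, n_iter=300):
    # EM updates for sparse Bayesian learning: gamma[i] is the prior
    # signal power of grid bin i; large gamma after convergence marks a DOA.
    gamma = np.ones(A.shape[1])
    for _ in range(n_iter):
        C = sigma2 * np.eye(len(y)) + (A * gamma) @ A.conj().T
        mu = gamma * (A.conj().T @ np.linalg.solve(C, y))   # posterior mean
        q = np.real(np.sum(A.conj() * np.linalg.solve(C, A), axis=0))
        gamma = np.abs(mu) ** 2 + gamma - gamma ** 2 * q    # mean + post. var
    return gamma

grid = np.arange(-90.0, 91.0)            # 1-degree on-grid dictionary
A = steering_matrix(16, grid)
rng = np.random.default_rng(0)
s = np.zeros(len(grid), dtype=complex)
s[np.searchsorted(grid, -20.0)] = 1.0    # two hypothetical sources
s[np.searchsorted(grid, 35.0)] = 0.8
noise = 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
y = A @ s + noise

gamma = sbl_em(y, A, sigma2=0.005)
picks = []                               # greedy peak picking, 10-bin spacing
for i in np.argsort(gamma)[::-1]:
    if all(abs(i - p) > 10 for p in picks):
        picks.append(i)
    if len(picks) == 2:
        break
print(sorted(grid[p] for p in picks))
```

The per-iteration cost is dominated by the m-by-m solve; the abstract's point is that naive EM/VB iterations like these become prohibitive as the grid and the number of frequency bins grow, which is what the proposed methods address.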
We describe a framework for designing efficient active learning algorithms that are tolerant to random classification noise and are differentially private. The framework is based on active learning algorithms that are statistical in the sense that they rely on estimates of expectations of functions of filtered random examples. It builds on the powerful statistical query framework of Kearns (JACM 45(6):983-1006, 1998). We show that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm that is tolerant to random classification noise as well as other forms of "uncorrelated" noise. The complexity of the resulting algorithms has information-theoretically optimal quadratic dependence on 1/(1-2η), where η is the noise rate. We show that commonly studied concept classes, including thresholds, rectangles, and linear separators, can be efficiently actively learned in our framework. These results, combined with our generic conversion, lead to the first computationally efficient algorithms for actively learning some of these concept classes in the presence of random classification noise that provide exponential improvement in the dependence on the error over their passive counterparts. In addition, we show that our algorithms can be automatically converted to efficient active differentially-private algorithms. This leads to the first differentially-private active learning algorithms with exponential label savings over the passive case.
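The noise-tolerance conversion rests on a simple identity: under random classification noise of rate η, E[φ(x)·y_noisy] = (1-2η)·E[φ(x)·y], so any statistical query answered on noisy examples can be rescaled to recover the clean expectation. A minimal numerical sketch of that identity (the threshold concept, the query function, and all parameter values are hypothetical illustration choices):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.2                       # random classification noise rate (< 1/2)
n = 200_000

x = rng.uniform(-1.0, 1.0, n)   # hypothetical example: uniform inputs
y = np.sign(x - 0.3)            # true labels from a threshold concept
flips = rng.random(n) < eta     # RCN: each label flipped independently
y_noisy = np.where(flips, -y, y)

phi = x                         # a statistical query function phi(x)
clean = np.mean(phi * y)        # target expectation E[phi(x) y]
naive = np.mean(phi * y_noisy)  # attenuated by (1 - 2*eta) under RCN
corrected = naive / (1.0 - 2.0 * eta)   # unbiased after rescaling
print(round(clean, 3), round(corrected, 3))
```

The rescaling blows up as η approaches 1/2, which is why the resulting complexity depends on 1/(1-2η); the framework's contribution is that this dependence is quadratic, which is information-theoretically optimal.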
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" (EL) can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter, and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders-of-magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
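The approximation is easy to state for a canonical Poisson GLM: the exact log-likelihood is Σ_i [y_i x_iᵀθ - exp(x_iᵀθ)], and the EL replaces the data-dependent sum Σ_i exp(x_iᵀθ) with n·E[exp(xᵀθ)], which for Gaussian covariates x ~ N(0, Σ) equals n·exp(θᵀΣθ/2). After precomputing Xᵀy once, each EL evaluation costs O(d²) instead of O(nd). A hedged sketch on simulated data (Newton iterations and all parameter values are illustration choices, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20_000, 3
Sigma = np.eye(d)                 # covariate covariance, known by design
X = rng.multivariate_normal(np.zeros(d), Sigma, n)
theta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ theta_true))   # canonical Poisson GLM responses

def fit_exact_mle(n_iter=25):
    # Newton's method on the exact Poisson log-likelihood
    th = np.zeros(d)
    for _ in range(n_iter):
        mu = np.exp(X @ th)                     # O(n d) every iteration
        grad = X.T @ (y - mu)
        hess = (X * mu[:, None]).T @ X
        th = th + np.linalg.solve(hess, grad)
    return th

def fit_max_el(n_iter=50):
    # Newton's method on the expected log-likelihood
    #   s'th - n * exp(th' Sigma th / 2),   s = X'y precomputed once
    s = X.T @ y
    th = np.zeros(d)
    for _ in range(n_iter):
        e = n * np.exp(0.5 * th @ Sigma @ th)   # O(d^2) per iteration
        grad = s - e * (Sigma @ th)
        hess = e * (Sigma + np.outer(Sigma @ th, Sigma @ th))
        th = th + np.linalg.solve(hess, grad)
    return th

mle = fit_exact_mle()
mele = fit_max_el()
print(np.round(mle, 2), np.round(mele, 2))
```

On this simulated design the two estimators nearly coincide, which mirrors the abstract's risk-analysis claim: the EL estimator trades at most a little accuracy for the fact that its objective no longer touches the n data points inside the optimization loop.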