
Computing f-divergences and distances of high-dimensional probability density functions

Authors: Litvinenko, Alexander; Marzouk, Youssef; Matthies, Hermann G.; Scavino, Marco; Spantini, Alessio

Affiliations: RWTH Aachen, Dept. of Mathematics, Aachen, Germany; MIT, Dept. of Aeronautics & Astronautics, 77 Massachusetts Ave, Cambridge, MA 02139, USA; TU Braunschweig, Carl-Friedrich-Gauss-Fakultät, Braunschweig, Germany; Universidad de la República, IESTA, Dept. de Métodos Cuantitativos, Montevideo, Uruguay

Journal: NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS

Year/Volume/Issue: 2023, Vol. 30, No. 3

Pages: e2467 (article number)


Subject classification: 07 [Science]; 0701 [Science - Mathematics]; 070101 [Science - Pure Mathematics]

Funding: Alexander von Humboldt-Stiftung; Deutsche Forschungsgemeinschaft; Gay-Lussac Humboldt Research Award

Keywords: computational algorithms; f-divergence; high dimensional probability density; Kullback-Leibler divergence; low-rank approximation; tensor representation

Abstract: Very often, in the course of uncertainty quantification tasks or data analysis, one has to deal with high-dimensional random variables. Here the interest is mainly in computing characterizations like the entropy, the Kullback-Leibler divergence, more general f-divergences, or other such characteristics based on the probability density. The density is often not available directly, and merely representing it in a numerically feasible fashion is already a computational challenge when the dimension is even moderately large. Actually computing such characteristics in the high-dimensional case is an even harder numerical challenge. In this regard it is proposed to approximate the discretized density in a compressed form, in particular by a low-rank tensor. This can alternatively be obtained from the corresponding probability characteristic function, or from more general representations of the underlying random variable. The mentioned characterizations require point-wise functions like the logarithm. This normally trivial task becomes computationally difficult when the density is approximated in a compressed, i.e. low-rank, tensor format, as the point values are not directly accessible. The computations become possible by considering the compressed data as an element of an associative, commutative algebra with an inner product, and using matrix algorithms to accomplish the mentioned tasks. The representation as a low-rank element of a high-order tensor space makes it possible to reduce the computational complexity and storage cost from exponential in the dimension to almost linear.
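As a toy illustration of the kind of savings the abstract describes (this is not the authors' algorithm, only a sketch of the simplest case): for a rank-1, i.e. product, discretized density p(x) = p_1(x_1)···p_d(x_d), the KL divergence between two such densities decomposes into a sum of one-dimensional divergences, so it can be evaluated from the d·n factor entries without ever forming the full n^d tensor. The grid size n and dimension d below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pmf(n):
    """A random discrete probability vector of length n (strictly positive)."""
    w = rng.random(n) + 0.1
    return w / w.sum()

d, n = 3, 8                       # dimension and grid points per axis (toy sizes)
p_factors = [random_pmf(n) for _ in range(d)]
q_factors = [random_pmf(n) for _ in range(d)]

# Factor-wise evaluation: for product measures, KL(p||q) = sum_i KL(p_i||q_i).
# Cost and storage are O(d*n) instead of O(n**d).
kl_lowrank = sum(float(np.sum(pi * np.log(pi / qi)))
                 for pi, qi in zip(p_factors, q_factors))

# Brute-force check on the assembled n**d tensor (only feasible for small d).
P, Q = p_factors[0], q_factors[0]
for i in range(1, d):
    P = np.multiply.outer(P, p_factors[i])
    Q = np.multiply.outer(Q, q_factors[i])
kl_full = float(np.sum(P * np.log(P / Q)))

assert abs(kl_lowrank - kl_full) < 1e-12
```

Densities of higher tensor rank (sums of several such products) no longer admit this closed-form split, since the logarithm does not act factor-wise; that is precisely the difficulty the paper addresses with its algebra-based matrix algorithms.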
