ISBN:
(Print) 9781538646595
This paper proposes PRIMA: Probabilistic Ranking with Inter-item competition and Multi-Attribute utility function, which ranks items based on their probabilities of being a user's best choice. This framework is particularly important in E-commerce applications for making recommendations, predicting sales, and developing pricing strategies. To achieve mathematical tractability, it uses the weight-based multi-attribute utility function to address the inter-attribute tradeoff, where the weight reflects a user's personal preference for each attribute. The proposed work updates the weight from a user's past transactions using the concept of marginal rate of substitution from microeconomics, addresses the inter-item competition, and computes the items' probabilities of being a user's best choice. Real-user test results show that the proposed framework achieves comparable ranking accuracy to the state-of-the-art work with significant improvements in model simplicity and mathematical tractability.
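The weight-based multi-attribute utility described above can be sketched as follows. This is a minimal illustration, not PRIMA itself: each item's utility is a weighted sum of its attribute values, with the weight vector standing in for the user's learned preferences; the paper's probabilistic inter-item competition model and the MRS-based weight updates are omitted.

```python
import numpy as np

def rank_items(attributes, weights):
    """Rank items by weighted multi-attribute utility.

    attributes: (n_items, n_attrs) array of normalized attribute values.
    weights:    (n_attrs,) vector reflecting the user's preference per attribute.
    """
    utilities = attributes @ weights   # weight-based multi-attribute utility
    order = np.argsort(-utilities)     # items sorted by descending utility
    return order, utilities

# Hypothetical example: two attributes (quality score, price score).
attrs = np.array([[0.9, 0.2],   # item 0: high quality, poor price
                  [0.5, 0.8],   # item 1: balanced
                  [0.3, 0.9]])  # item 2: low quality, best price
w = np.array([0.7, 0.3])        # this user values quality over price
order, u = rank_items(attrs, w)
```

A different weight vector (e.g. a price-sensitive user with `w = [0.2, 0.8]`) would reverse the ranking, which is the tradeoff the per-user weights are meant to capture.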
Accurately monitoring the system's operating point is central to the reliable and economic operation of an electric power grid. Power system state estimation (PSSE) aims to obtain complete voltage magnitude and an...
We develop Riemannian Stein Variational Gradient Descent (RSVGD), a Bayesian inference method that generalizes Stein Variational Gradient Descent (SVGD) to Riemann manifold. The benefits are two-folds: (i) for inferen...
This work develops a new iterative algorithm, called stochastic truncated amplitude flow (STAF), to recover an unknown signal x ∈ R^n from m "phaseless" quadratic equations of the form ψ_i = |a_i^T x|, 1 ≤ i ≤ m. This problem, also known as phase retrieval, is NP-hard in general. Building on an amplitude-based nonconvex least-squares formulation, STAF proceeds in two stages: s1) an orthogonality-promoting initialization computed using a stochastic variance reduced gradient algorithm; and s2) refinement of the initial point through truncated stochastic gradient-type iterations. Both stages handle a single equation per iteration, which lends STAF well to Big Data applications. Specifically, for independent Gaussian vectors {a_i}_{i=1}^m, STAF recovers any x exactly and exponentially fast when there are about as many equations as unknowns. Finally, numerical tests demonstrate that STAF improves upon its competing alternatives.
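The refinement stage described above can be sketched on synthetic data. This is a toy illustration, not the paper's algorithm: STAF's first stage is an orthogonality-promoting initialization computed via SVRG, whereas here we simply start near the ground truth; the truncation threshold, the Kaczmarz-style normalized step, and the step size are illustrative choices rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                        # unknowns and equations
x = rng.standard_normal(n)            # ground-truth signal
A = rng.standard_normal((m, n))       # Gaussian sensing vectors a_i as rows
psi = np.abs(A @ x)                   # phaseless measurements psi_i = |a_i^T x|

# Stand-in for the orthogonality-promoting initialization: start near x.
z = x + 0.3 * rng.standard_normal(n)

# Truncated stochastic gradient iterations on the amplitude-based
# least-squares objective (1/2m) * sum_i (|a_i^T z| - psi_i)^2,
# touching a single equation per iteration.
gamma, step = 0.7, 0.5
for _ in range(20000):
    i = rng.integers(m)               # sample one equation
    ai = A[i]
    inner = ai @ z
    # Truncation: skip equations whose gradient direction is unreliable
    # (|a_i^T z| too small relative to the measured amplitude).
    if abs(inner) >= psi[i] / (1.0 + gamma):
        grad = (inner - psi[i] * np.sign(inner)) * ai
        z -= step * grad / (ai @ ai)  # normalized (Kaczmarz-style) update

# Phase retrieval recovers x only up to a global sign.
rel_err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

With m/n ≈ 20 the system is well over-determined, so the truncated stochastic iterations drive the relative error down rapidly from the warm start; the interesting regime studied in the paper is m close to n, where the truncation and the specialized initialization matter.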
Developing efficient and scalable algorithms for Latent Dirichlet Allocation (LDA) is of wide interest for many applications. Previous work has developed an O(1) Metropolis-Hastings (MH) sampling method for each token...
We propose a vector-valued regression problem whose solution is equivalent to the reproducing kernel Hilbert space (RKHS) embedding of the Bayesian posterior distribution. This equivalence provides a new understanding...
We propose two stochastic gradient MCMC methods for sampling from Bayesian posterior distributions defined on Riemann manifolds with a known geodesic flow, e.g. hyperspheres. Our methods are the first scalable samplin...
ISBN:
(Print) 9781510838819
Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models for characterizing a joint distribution of variables via kernel mean embedding. In this paper, we present conditional generative moment-matching networks (CGMMN), which learn a conditional distribution given some input variables based on a conditional maximum mean discrepancy (CMMD) criterion. The learning is performed by stochastic gradient descent with the gradient calculated by back-propagation. We evaluate CGMMN on a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge, which distills knowledge from a Bayesian model by learning a relatively small CGMMN student network. Our results demonstrate competitive performance in all the tasks.
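The kernel-mean-embedding building block behind the criterion above can be sketched as the squared MMD between two samples under an RBF kernel; CMMD extends this idea to conditional distributions. The biased V-statistic estimator and the bandwidth `sigma` below are illustrative choices, not the paper's setup.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gram matrix of the RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 2))          # sample from N(0, I)
Q_same = rng.standard_normal((200, 2))     # another sample from N(0, I)
Q_far = rng.standard_normal((200, 2)) + 3  # sample shifted away from P
```

Samples drawn from the same distribution yield an MMD near zero, while a shifted sample yields a clearly larger value; moment-matching networks minimize exactly this kind of discrepancy between generated and real data.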