The joint roughness coefficient (JRC) is critical for evaluating the strength and deformation behavior of jointed rock masses in rock engineering. Various methods have been developed to estimate the JRC value from statistical parameters of rock joints. However, the JRC value is uncertain because rock joints are complex and random, and uncertainty is an essential characteristic of rock joints that traditional deterministic methods cannot capture during the analysis, evaluation, and characterization of joint behavior. This study develops a novel framework to estimate the JRC value and evaluate the uncertainty of rock joints based on symbolic regression and probabilistic programming. Symbolic regression is used to generate a general empirical equation, with unknown coefficients, for JRC determination; probabilistic programming is used to quantify the uncertainty of joint roughness. The framework is illustrated and investigated on the ten standard rock joint profiles and then applied to rock joint profiles collected from the literature. The predicted JRC values are compared with those of traditional empirical equations. The results show that the generalization performance of the developed framework is better than that of traditional deterministic empirical equations. The framework provides a scientific, reliable, and practical way to estimate the JRC value and characterize the mechanical behavior of jointed rock masses.
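As a point of reference for the deterministic baseline the abstract contrasts with, the sketch below computes the Z2 slope statistic of a digitized profile and applies the well-known Tse and Cruden empirical equation, JRC = 32.2 + 32.47 log10(Z2). The Monte Carlo variant only hints at the probabilistic treatment: the coefficient spreads and the sample profile are illustrative, not from the paper.

```python
import math
import random

def z2(heights, dx):
    """Root-mean-square first derivative (Z2) of a digitized joint profile."""
    n = len(heights) - 1
    return math.sqrt(sum((heights[i + 1] - heights[i]) ** 2 for i in range(n)) / (n * dx ** 2))

def jrc_deterministic(z2_value):
    """Tse & Cruden (1979) empirical equation: JRC = 32.2 + 32.47 * log10(Z2)."""
    return 32.2 + 32.47 * math.log10(z2_value)

def jrc_uncertain(z2_value, n=1000, seed=0):
    """Illustrative probabilistic variant: treat the empirical coefficients as
    uncertain (spreads chosen for illustration only) and return a Monte Carlo
    sample of JRC values rather than a single number."""
    rng = random.Random(seed)
    return [rng.gauss(32.2, 1.0) + rng.gauss(32.47, 1.0) * math.log10(z2_value)
            for _ in range(n)]

# Hypothetical profile sampled every 0.5 mm:
profile = [0.0, 0.2, 0.35, 0.3, 0.5, 0.45, 0.7, 0.65]
roughness = z2(profile, 0.5)
```

Returning a sample of JRC values, instead of a point estimate, is the difference the framework exploits: downstream strength calculations can propagate the full uncertainty.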
ISBN:
(Print) 9783031737084; 9783031737091
The probabilistic programming paradigm is gaining popularity due to the possibility of easily representing probabilistic systems and running a number of off-the-shelf inference algorithms on them. This paper explores how this paradigm can be used to analyse collective systems, in the form of Markov Population Processes (MPPs). MPPs have been extensively used to represent systems of interacting agents, but their analysis is challenging due to the high computational cost required to perform exact simulations of the systems. We represent MPPs as runs of the approximate variant of the Stochastic Simulation Algorithm (SSA), known as tau-leaping, which can be seen as a probabilistic program. We apply Gaussian Semantics, a recently proposed inference method for probabilistic programs, to analyse it. We show that tau-leaping runs can be effectively analysed using a tailored version of Second Order Gaussian Approximation in which we use a Gaussian Mixture encoding of Poisson distributions. In the resulting analysis, the state of the system is approximated by a multivariate Gaussian Mixture generalizing other common Gaussian approximations such as the Linear Noise Approximation and the Langevin Method. Preliminary numerical experiments show that this approach is able to analyse MPPs with reasonable accuracy on the significant statistics while avoiding expensive numerical simulations.
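A minimal illustration of the tau-leaping scheme the paper analyses, for a one-species birth-death population process; Poisson sampling uses Knuth's algorithm so the sketch stays dependency-free, and the rates and leap size are arbitrary, not taken from the paper's experiments.

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's algorithm (adequate for the small rates here)."""
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_birth_death(n0, birth, death, tau, steps, seed=0):
    """Tau-leaping for a birth-death MPP: over each leap of length tau, the
    number of firings of each reaction is approximated by a Poisson draw with
    the propensity frozen at the start of the leap."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        births = poisson(birth * tau, rng)      # propensity: birth (constant)
        deaths = poisson(death * n * tau, rng)  # propensity: death * n
        n = max(n + births - deaths, 0)         # clamp to avoid negative counts
    return n
```

With birth = 10 and death = 0.1 the stationary mean population is birth/death = 100; averaging the endpoint over many leaping runs recovers it, which is the kind of summary statistic the Gaussian Mixture analysis approximates without running simulations.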
The core thesis of this paper could easily be reproduced in the original environment. The authors provided an almost fully automated set of scripts that automatically execute the experiments. The experimental data was...
Stochastic approximation methods for variational inference have recently gained popularity in the probabilistic programming community since these methods are amenable to automation and allow online, scalable, and universal approximate Bayesian inference. Unfortunately, common probabilistic programming languages (PPLs) with stochastic approximation engines lack the efficiency of message passing-based inference algorithms with deterministic update rules such as Belief Propagation (BP) and Variational Message Passing (VMP). Still, Stochastic Variational Inference (SVI) and Conjugate-Computation Variational Inference (CVI) provide principled methods to integrate fast deterministic inference techniques with broadly applicable stochastic approximate inference. Unfortunately, implementing SVI and CVI requires manually derived variational update rules, which are not yet available in most PPLs. In this paper, we cast SVI and CVI explicitly in a message passing-based inference context. We provide an implementation of SVI and CVI in ForneyLab, an automated message passing-based probabilistic programming package in the open-source Julia language. Through a number of experiments, we demonstrate how SVI and CVI extend the automated inference capabilities of message passing-based probabilistic programming. (C) 2022 The Author(s). Published by Elsevier Inc.
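To make the contrast concrete, here is a hand-written natural-gradient SVI loop for the simplest conjugate model, x_i ~ N(mu, 1) with prior mu ~ N(0, 1). This is exactly the kind of manually derived update rule that the paper automates as message passing in ForneyLab; the model, step size, and batch size are illustrative.

```python
import random

def svi_gaussian_mean(data, rho=0.1, batch=10, iters=500, seed=0):
    """Natural-gradient SVI for mu in the conjugate model x_i ~ N(mu, 1),
    prior mu ~ N(0, 1). The approximation q(mu) = N(m, s2) is tracked in
    natural parameters (eta1, eta2); each step mixes them with a noisy
    minibatch estimate of the coordinate-ascent optimum."""
    rng = random.Random(seed)
    N = len(data)
    eta1, eta2 = 0.0, -0.5                      # natural parameters of the prior N(0, 1)
    for _ in range(iters):
        mb = [rng.choice(data) for _ in range(batch)]
        hat1 = 0.0 + (N / batch) * sum(mb)      # prior eta1 + rescaled minibatch stats
        hat2 = -0.5 - N / 2.0                   # prior eta2 + rescaled minibatch stats
        eta1 = (1 - rho) * eta1 + rho * hat1    # the "deterministic update rule",
        eta2 = (1 - rho) * eta2 + rho * hat2    # derived by hand for this model
    s2 = -1.0 / (2.0 * eta2)                    # back to mean/variance
    m = eta1 * s2
    return m, s2
```

For N observations the exact posterior is N(sum(x)/(N + 1), 1/(N + 1)), which the iteration approaches; deriving hat1/hat2 by hand for every new model is the burden the paper removes.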
The development process for an environmental model involves multiple iterations of a planning-implementation-assessment cycle. Probabilistic programming languages (PPLs) are designed to expedite this process with general-purpose methods for implementing models, efficiently inferring their parameters, and generating probabilistic predictions. Probabilistic programming exists at the intersection of Bayesian statistics, machine learning, and process-based modelling and therefore can be of value to the environmental modelling community. In this review article, we explain how it can be used to accelerate model development and allow for statistical inference using more complicated models and larger data sets than previously possible. Specific challenges and limitations to employing such frameworks are also raised. We provide guidance to help modellers decide whether incorporating probabilistic programming in their work may improve the efficiency and quality of their analyses.
Agricultural nitrate emissions within a river catchment are, due to rainfall and other sources of natural variation, uncertain. A regulator aiming to reduce nitrate emissions into surface and groundwater faces a trade-off between reliability in achieving emission standards and the cost of compliance to agriculture. This paper explores this trade-off by comparing different assumptions about the probability distribution of nitrate emissions and thus the probabilistic constraint included in the catchment model. Three categories of probabilistic constraints are considered: (1) non-parametric, (2) normal and (3) lognormal. The results indicate that the restrictiveness of the non-parametric assumption could lead to a significant reduction in profit relative to the normal and lognormal assumptions. The lognormal assumption, although theoretically correct, cannot be generalised to the case of correlated emissions. However, ignoring the dependence between different sources of nitrate emissions introduces more bias than mis-specifying their distribution. Therefore, a probabilistic constraint based on a correlated normal distribution of emissions gives the best approximation for nitrate emissions in this study. (C) 2002 Elsevier Science B.V. All rights reserved.
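The normal case has a standard deterministic equivalent: a chance constraint P(emission <= standard) >= alpha on a normally distributed emission with known standard deviation reduces to a cap on the mean emission. A sketch with illustrative numbers (not from the paper):

```python
from statistics import NormalDist

def max_mean_emission(std, standard, reliability):
    """Deterministic equivalent of the chance constraint
    P(emission <= standard) >= reliability when emission ~ N(mean, std^2):
    mean + z * std <= standard, with z the reliability quantile, solved
    for the largest admissible mean."""
    z = NormalDist().inv_cdf(reliability)
    return standard - z * std

# Illustrative numbers: a standard of 50 kg N/ha and an emission
# standard deviation of 8 kg N/ha.
cap_50 = max_mean_emission(8.0, 50.0, 0.50)   # no safety margin
cap_95 = max_mean_emission(8.0, 50.0, 0.95)   # ~1.64 sigma tighter
```

Raising the required reliability from 0.5 to 0.95 tightens the admissible mean by about 1.64 standard deviations; that widening margin is the compliance cost the paper trades off against reliability.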
We spell out the paradigm of exact conditioning as an intuitive and powerful way of conditioning on observations in probabilistic programs. This is contrasted with likelihood-based scoring known from languages such as Stan. We study exact conditioning in the cases of discrete and Gaussian probability, presenting prototypical languages for each case and giving semantics to them. We make use of categorical probability (namely Markov and CD categories) to give a general account of exact conditioning, which avoids limits and measure theory, instead focusing on restructuring dataflow and program equations. The correspondence between such categories and a class of programming languages is made precise by defining the internal language of a CD category.
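For intuition, in the Gaussian case exact conditioning is a closed-form symbolic update rather than an accumulated likelihood weight. A one-dimensional sketch (not the paper's categorical semantics), conditioning x ~ N(prior_mean, prior_var) on the exact event x + e == y_obs with independent noise e:

```python
def condition_gaussian(prior_mean, prior_var, noise_var, y_obs):
    """Exactly condition x ~ N(prior_mean, prior_var) on the event
    x + e == y_obs with independent e ~ N(0, noise_var). The result is a
    closed-form update of the distribution itself; no score is recorded."""
    gain = prior_var / (prior_var + noise_var)
    post_mean = prior_mean + gain * (y_obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Conditioning N(0, 1) on x + e == 2 with unit noise splits the difference:
pm, pv = condition_gaussian(0.0, 1.0, 1.0, 2.0)   # pm = 1.0, pv = 0.5
```

In a scoring language such as Stan, the same observation would instead contribute a likelihood factor to an unnormalized density; the exact-conditioning view rewrites the program state directly, which is the dataflow restructuring the paper formalizes.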
We present a new approach to the design and implementation of probabilistic programming languages (PPLs), based on the idea of stochastically estimating the probability density ratios necessary for probabilistic inference. By relaxing the usual PPL design constraint that these densities be computed exactly, we are able to eliminate many common restrictions in current PPLs, to deliver a language that, for the first time, simultaneously supports first-class constructs for marginalization and nested inference, unrestricted stochastic control flow, continuous and discrete sampling, and programmable inference with custom proposals. At the heart of our approach is a new technique for compiling these expressive probabilistic programs into randomized algorithms for unbiasedly estimating their densities and density reciprocals. We employ these stochastic probability estimators within modified Monte Carlo inference algorithms that are guaranteed to be sound despite their reliance on inexact estimates of density ratios. We establish the correctness of our compiler using logical relations over the semantics of λSP, a new core calculus for modeling and inference with stochastic probabilities. We also implement our approach in an open-source extension to Gen, called GenSP, and evaluate it on six challenging inference problems adapted from the modeling and inference literature. We find that: (1) GenSP can automate fast density estimators for programs with very expensive exact densities; (2) convergence of inference is mostly unaffected by the noise from these estimators; and (3) our sound-by-construction estimators are competitive with hand-coded density estimators, incurring only a small constant-factor overhead.
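The core idea, replacing an exact density with an unbiased stochastic estimate, can be illustrated on a finite mixture, where the exact marginal sums over the latent component but an unbiased estimate only samples it. The mixture here is hypothetical and the sketch is not GenSP's compiler output.

```python
import math
import random

def normal_pdf(x, mu, s2):
    return math.exp(-(x - mu) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

def estimate_marginal_density(x, components, n=1, seed=0):
    """Unbiased estimate of the marginal p(x) = sum_z p(z) * p(x | z) for a
    finite Gaussian mixture: instead of summing over the latent z (the exact,
    possibly expensive density), sample z from its prior and average p(x | z).
    Each term is an unbiased estimator of p(x), even with n = 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w, acc = rng.random(), 0.0
        for p_z, mu, s2 in components:   # sample the component index z from p(z)
            acc += p_z
            if w <= acc:
                total += normal_pdf(x, mu, s2)
                break
    return total / n
```

With many latent variables the exact sum becomes exponentially expensive while the sampled estimate stays cheap; the paper's contribution is keeping Monte Carlo inference sound when such noisy estimates (and estimates of reciprocals) replace exact densities.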
We study semantic models of probabilistic programming languages over graphs, and establish a connection to graphons from graph theory and combinatorics. We show that every well-behaved equational theory for our graph probabilistic programming language corresponds to a graphon, and conversely, every graphon arises in this way. We provide three constructions for showing that every graphon arises from an equational theory. The first is an abstract construction, using Markov categories and monoidal indeterminates. The second and third are more concrete. The second is in terms of traditional measure theoretic probability, which covers 'black-and-white' graphons. The third is in terms of probability monads on the nominal sets of Gabbay and Pitts. Specifically, we use a variation of nominal sets induced by the theory of graphs, which covers Erdos-Renyi graphons. In this way, we build new models of graph probabilistic programming from graphons.
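Concretely, a graphon W: [0,1]^2 -> [0,1] induces a random graph by drawing a latent uniform per vertex and connecting pairs independently with probability W(u_i, u_j); the constant graphon recovers the Erdos-Renyi model the abstract mentions. A small generative sketch (illustrative, not the paper's categorical construction):

```python
import random

def sample_graphon(n, W, seed=0):
    """Sample an n-vertex graph from graphon W: draw a latent uniform u_i per
    vertex, then connect each pair i < j independently with probability
    W(u_i, u_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(u[i], u[j]):
                edges.add((i, j))
    return edges

# The constant graphon W = 1/2 is the Erdos-Renyi model G(n, 1/2);
# W(x, y) = x * y instead gives heterogeneous vertex degrees.
g = sample_graphon(100, lambda x, y: 0.5, seed=1)
```

The resulting edge distribution is exchangeable in the vertex labels, which is the property linking these generative programs to the equational theories studied in the paper.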
Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for variational inference (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs that may employ first-class constructs for computing densities of and expected values under user-defined models and variational families. We then transform these programs systematically into unbiased gradient estimators for optimizing the objectives they define. Our design enables modular reasoning about many interacting concerns, including automatic differentiation, density accumulation, tracing, and the application of unbiased gradient estimation strategies. Additionally, relative to existing support for VI in PPLs, our design increases expressiveness along three axes: (1) it supports an open-ended set of user-defined variational objectives, rather than a fixed menu of options; (2) it supports a combinatorial space of gradient estimation strategies, many not automated by today's PPLs; and (3) it supports a broader class of models and variational families, because it supports constructs for approximate marginalization and normalization (previously introduced only for Monte Carlo inference). We implement our approach in an extension to the Gen probabilistic programming system (***, implemented in JAX), and evaluate our automation on several deep generative modeling tasks, showing minimal performance overhead vs. hand-coded implementations and performance competitive with well-established open-source PPLs.
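As a toy version of one such gradient estimation strategy, the score-function (REINFORCE) estimator can be derived by hand for the simplest objective program, E_q[log p(z) - log q(z)], with q(z) = N(m, 1) and an illustrative unnormalized target log p(z) = -(z - 2)^2 / 2. The target, step size, and sample counts are not from the paper.

```python
import random

def elbo_grad_estimate(m, rng, n=64):
    """Score-function (REINFORCE) estimator of d/dm of the objective
    E_q[log p(z) - log q(z)] for q(z) = N(m, 1) and the illustrative
    unnormalized target log p(z) = -(z - 2)^2 / 2."""
    total = 0.0
    for _ in range(n):
        z = rng.gauss(m, 1.0)
        score = z - m                                    # d/dm log q(z)
        f = -(z - 2.0) ** 2 / 2.0 + (z - m) ** 2 / 2.0   # log p(z) - log q(z)
        total += score * f
    return total / n

# Stochastic gradient ascent on the objective drives m toward the target mean 2.
rng = random.Random(0)
m = -3.0
for _ in range(2000):
    m += 0.05 * elbo_grad_estimate(m, rng)
```

Writing the objective as an explicit expectation program, then mechanically substituting an unbiased estimator for its gradient, is exactly the transformation the paper automates; swapping in a reparameterization estimator would change only the derived program, not the objective.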