ISBN (print): 9780769544120
We study the computability of conditional probability, a fundamental notion in probability theory and Bayesian statistics. In the elementary discrete setting, a ratio of probabilities defines conditional probability. In more general settings, conditional probability is defined axiomatically, and the search for more constructive definitions is the subject of a rich literature in probability theory and statistics. However, we show that in general one cannot compute conditional probabilities. Specifically, we construct a pair of computable random variables (X, Y) in the unit interval whose conditional distribution P[Y | X] encodes the halting problem. Nevertheless, probabilistic inference has proven remarkably successful in practice, even in infinite-dimensional continuous settings. We prove several results giving general conditions under which conditional distributions are computable. In the discrete or dominated setting, under suitable computability hypotheses, conditional distributions are computable. Likewise, conditioning is a computable operation in the presence of certain additional structure, such as independent absolutely continuous noise.
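As a minimal illustration of the elementary discrete setting mentioned above (not code from the paper; the toy joint distribution and function names are made up), conditional probability is just a ratio of probabilities:

    # Sketch of discrete conditioning as a ratio of probabilities.
    # The joint distribution below is an illustrative toy example.
    joint = {            # P[X = x, Y = y] for a toy pair of discrete variables
        ("a", 0): 0.10,
        ("a", 1): 0.30,
        ("b", 0): 0.35,
        ("b", 1): 0.25,
    }

    def conditional(y, x):
        """P[Y = y | X = x] = P[X = x, Y = y] / P[X = x]."""
        p_joint = joint.get((x, y), 0.0)
        p_x = sum(p for (xx, _), p in joint.items() if xx == x)
        if p_x == 0.0:
            raise ValueError("conditioning on a null event is undefined")
        return p_joint / p_x

    print(conditional(1, "a"))  # 0.30 / 0.40 = 0.75

The general (non-discrete) case treated in the paper is exactly where this ratio breaks down, because the conditioning event may have probability zero.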
ISBN (print): 9783642030727
We prove a uniformly computable version of de Finetti's theorem on exchangeable sequences of real random variables. In the process, we develop machinery for computably recovering a distribution from its sequence of moments, which suffices to prove the theorem in the case of (almost surely) continuous directing random measures. In the general case, we give a proof inspired by a randomized algorithm which succeeds with probability one. Finally, we show how, as a consequence of the main theorem, exchangeable stochastic processes in probabilistic functional programming languages can be rewritten as procedures that do not use mutation.
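The kind of rewriting described in the last sentence can be illustrated with the classical Polya-urn example (a standard textbook case, not the machinery from the paper): the urn version mutates state between draws, while the de Finetti form samples the directing random measure once and then draws i.i.d. values, with no mutation.

    # Illustrative sketch only; distributions chosen for the standard 1-red/1-black urn.
    import random

    def polya_urn_sequence(n, rng):
        """Exchangeable 0/1 sequence via a mutating urn (starts with 1 red, 1 black ball)."""
        red, black = 1, 1
        out = []
        for _ in range(n):
            draw = 1 if rng.random() < red / (red + black) else 0
            if draw:
                red += 1        # state is mutated between draws
            else:
                black += 1
            out.append(draw)
        return out

    def de_finetti_sequence(n, rng):
        """Same distribution: sample theta ~ Beta(1, 1) once, then i.i.d. Bernoulli(theta)."""
        theta = rng.betavariate(1, 1)
        return [1 if rng.random() < theta else 0 for _ in range(n)]

    rng = random.Random(0)
    print(polya_urn_sequence(10, rng))
    print(de_finetti_sequence(10, rng))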
Probabilistic incremental program evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions, population-based incremental learning, and tree-coded programs like those used in some variants of genetic programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. In each iteration, it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.
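A highly simplified, PBIL-style sketch of the idea (not the actual PIPE update rule; fixed-length instruction sequences stand in for PIPE's probabilistic prototype trees, and the instruction set, fitness function, and learning rate are made up): sample candidate programs from an adaptive distribution, then shift that distribution toward the best candidate.

    import random

    INSTRUCTIONS = ["x", "1", "+", "*"]   # toy instruction set
    LENGTH, POP, GENERATIONS, LR = 6, 20, 30, 0.2

    def fitness(program):
        # Toy objective: prefer programs containing many "+" instructions.
        return program.count("+")

    # One probability vector per program position, initially uniform.
    probs = [[1.0 / len(INSTRUCTIONS)] * len(INSTRUCTIONS) for _ in range(LENGTH)]

    rng = random.Random(0)
    for _ in range(GENERATIONS):
        population = [
            [rng.choices(INSTRUCTIONS, weights=probs[i])[0] for i in range(LENGTH)]
            for _ in range(POP)
        ]
        best = max(population, key=fitness)
        # Move each position's distribution toward the best program's instruction.
        for i, instr in enumerate(best):
            target = [1.0 if s == instr else 0.0 for s in INSTRUCTIONS]
            probs[i] = [(1 - LR) * p + LR * t for p, t in zip(probs[i], target)]

    print(best, fitness(best))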
Randomized algorithms, or probabilistic algorithms, extend the notion of algorithm by introducing input of random data and random choices in the process of computation. A new mathematical theory of the semantic domain...
Probabilistic programming languages (PPLs) are becoming increasingly important in many scientific disciplines, such as economics, epidemiology, and biology, to extract meaning from sources of data while accounting for one's uncertainty. The key idea of probabilistic programming is to decouple inference and model specification, thus allowing the practitioner to approach their task at hand using Bayesian inference, without requiring extensive knowledge of programming or computational statistics. At the same time, the complexity of the problem settings in which PPLs are employed is steadily increasing, both in terms of project size and model complexity, calling for more flexible and efficient PPLs. In this work, we describe Turing.jl, a general-purpose PPL, which is designed to be flexible, efficient, and easy to use. Turing.jl is built on top of the Julia programming language, which is known for its high performance and ease of use. We describe the design of Turing.jl, contextualizing it within different types of users and use cases, its key features, and how it can be used to solve a wide range of problems. We also provide a brief overview of the ecosystem around Turing.jl, including the different libraries and tools that can be used in conjunction with it. Finally, we provide a few examples of how Turing.jl can be used in practice.
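The decoupling of model specification from inference can be illustrated schematically in plain Python (this is not Turing.jl code or its API; the normal-mean model and the generic random-walk Metropolis routine below are made-up examples of the pattern): the model only provides a log-joint density, and the inference routine knows nothing about the particular model.

    import math, random

    def log_normal_pdf(x, mu, sigma):
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    def model_log_joint(mu, data):
        """Model specification: prior mu ~ Normal(0, 10); likelihood data[i] ~ Normal(mu, 1)."""
        lp = log_normal_pdf(mu, 0.0, 10.0)
        return lp + sum(log_normal_pdf(x, mu, 1.0) for x in data)

    def metropolis(log_joint, init, steps, step_size, rng):
        """Generic inference: works for any 1-D log_joint, independent of the model."""
        current, lp = init, log_joint(init)
        samples = []
        for _ in range(steps):
            proposal = current + rng.gauss(0.0, step_size)
            lp_prop = log_joint(proposal)
            if rng.random() < math.exp(min(0.0, lp_prop - lp)):
                current, lp = proposal, lp_prop
            samples.append(current)
        return samples

    rng = random.Random(0)
    data = [2.1, 1.7, 2.5, 1.9]
    samples = metropolis(lambda mu: model_log_joint(mu, data), 0.0, 5000, 0.5, rng)
    print(sum(samples[1000:]) / len(samples[1000:]))  # posterior mean estimate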
Synchronous languages are now a standard industry tool for critical embedded systems. Designers write high-level specifications by composing streams of values using block diagrams. These languages have been recently extended with Bayesian reasoning to program state-space models which compute a stream of distributions given a stream of observations. Yet, the semantics of probabilistic models is only defined for scheduled equations – a significant limitation compared to dataflow synchronous languages and block diagrams. In this paper we propose a new operational semantics and a new denotational semantics for a probabilistic synchronous language that are both schedule agnostic. The key idea is to externalize the source of randomness and interpret a probabilistic expression as a stream of functions mapping random elements to a value and a positive score. The operational semantics interprets expressions as state machines where mutually recursive equations are evaluated using a fixpoint operator. The denotational semantics directly manipulates streams and is thus a better fit to reason about program equivalence. We use the denotational semantics to prove the correctness of a program transformation required to run an optimized inference algorithm for state-space models with constant parameters.
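A rough Python sketch of the key idea described above (not from the paper; the toy state-space model, step function, and importance-sampling runner are illustrative assumptions): a probabilistic step is interpreted as a function from a random element and the current state to a value together with a score, and an inference runner weights complete runs by their accumulated scores.

    import math, random

    def normal_logpdf(x, mu, sigma):
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    def model_step(state, rnd, obs):
        """One step of a toy state-space model: given the previous state, a random
        element, and an observation, return (value, new_state, log_score)."""
        new_state = state + rnd                        # latent random walk driven by rnd
        log_score = normal_logpdf(obs, new_state, 0.5)  # how well the state explains obs
        return new_state, new_state, log_score

    def run(observations, n_runs, rng):
        """Plain importance sampling: weight each complete run by exp(accumulated log score)."""
        results = []
        for _ in range(n_runs):
            state, log_w, value = 0.0, 0.0, None
            for obs in observations:
                value, state, s = model_step(state, rng.gauss(0.0, 1.0), obs)
                log_w += s
            results.append((value, log_w))
        total = sum(math.exp(w) for _, w in results)
        return sum(v * math.exp(w) for v, w in results) / total

    rng = random.Random(0)
    print(run([0.9, 1.8, 3.1], n_runs=2000, rng=rng))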