This thesis deals with chance-constrained stochastic programming problems. The first chapter is an introduction. We formulate several stochastic programming problems in the second chapter. In chapter 3 we present the theory of α-concave functions and measures as a basic tool for proving convexity of the problems formulated in chapter 2 for continuous distributions of the random vectors. We use the results of the theory to characterize a large class of continuous distributions that satisfy the sufficient conditions for convexity, and to prove convexity of concrete sets. In chapter 4 we present sufficient conditions for the convexity of the problems and briefly discuss the method of p-level efficient points. In chapter 5 we solve a portfolio selection problem using Kataoka's model.
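Kataoka's model maximizes a return level that the portfolio attains with a prescribed probability. A minimal sketch of this idea, assuming normally distributed returns and illustrative data (the means, covariance, and probability level below are made up, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative data (assumed): mean returns and covariance of 3 assets.
mu = np.array([0.08, 0.05, 0.03])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.02, 0.00],
                  [0.00, 0.00, 0.01]])
alpha = 0.95  # required probability level

# Kataoka's model: maximize t subject to P(r'x >= t) >= alpha.
# Under normal returns this reduces to maximizing
#   mu'x + Phi^{-1}(1 - alpha) * sqrt(x' Sigma x).
def neg_objective(x):
    return -(mu @ x + norm.ppf(1 - alpha) * np.sqrt(x @ Sigma @ x))

cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
res = minimize(neg_objective, np.ones(3) / 3, bounds=[(0, 1)] * 3,
               constraints=cons)
x_opt = res.x
print("optimal weights:", np.round(x_opt, 3))
print("guaranteed return level t:", round(-res.fun, 4))
```

For this class of distributions the reduced problem is convex on the simplex, which is exactly the kind of structure the α-concavity theory in chapter 3 is used to establish.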
We extend a call-by-need variant of PCF with a binary probabilistic fair choice operator, which yields a lazy and typed variant of probabilistic functional programming. We define a contextual equivalence that respects the expected convergence of expressions and prove a corresponding context lemma. This enables us to show correctness of several program transformations with respect to contextual equivalence. Distribution-equivalence of expressions of numeric type is introduced. While the notion of contextual equivalence stems from program semantics, the notion of distribution-equivalence is a direct description of the stochastic model. Our main result is that both notions are compatible: we show that for closed expressions of numeric type, contextual equivalence and distribution-equivalence coincide. This provides a strong and often operationally feasible criterion for contextual equivalence of expressions and programs. (c) 2023 Elsevier Inc. All rights reserved.
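The "operationally feasible criterion" can be illustrated outside the paper's calculus: two syntactically different closed numeric expressions are distribution-equivalent when they induce the same distribution on results. A minimal Python sketch (thunks stand in for lazy arguments; a call-by-need semantics would additionally memoize them):

```python
import random
from collections import Counter

def choice(a, b):
    """Binary fair probabilistic choice: evaluates a() or b(), each with
    probability 1/2. Thunks model delayed evaluation of the arguments."""
    return a() if random.random() < 0.5 else b()

# Two syntactically different closed expressions of numeric type.
expr1 = lambda: choice(lambda: 0, lambda: 1)
expr2 = lambda: (choice(lambda: 0, lambda: 1)
                 + choice(lambda: 0, lambda: 1)) % 2

def empirical_dist(expr, n=100_000):
    counts = Counter(expr() for _ in range(n))
    return {k: v / n for k, v in counts.items()}

random.seed(0)
d1, d2 = empirical_dist(expr1), empirical_dist(expr2)
# Both empirical distributions should be close to uniform on {0, 1},
# suggesting (by the paper's main result) contextual equivalence.
```

The sketch only estimates distributions by sampling; the paper's result is the exact statement that for closed numeric expressions, equality of result distributions coincides with contextual equivalence.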
For applications in condition-based maintenance of nuclear systems, the assessment of fault severity is crucial. In this work, we developed a framework that allows for direct inference of the probability distributions of possible faults in a system. We employed a model-based approach with model residuals generated from analytical redundancy relations provided by physics-based models of the system components. From real-time sensor readings, the values of the model residuals can be calculated, and the posterior probability distributions of the faults can be computed directly using the methods of Bayesian networks. From the posterior distribution of each fault, one can estimate the fault probability based on a chosen threshold and assess the severity of the fault. By eliminating the discretization and simplifications in intermediate steps, this approach allows us to leverage the available computational resources to provide more accurate fault probability estimates and severity assessments.
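The core inference step — a continuous Bayes update from a residual to a fault posterior, with no intermediate discretization — can be sketched with a deliberately tiny model (one binary fault, one Gaussian residual; all numbers below are assumed, not from the paper):

```python
from scipy.stats import norm

# Assumed toy model: a single fault F with prior p(F=1), and one residual r
# whose likelihood depends on the fault state (healthy vs. faulty mean).
prior_fault = 0.05
healthy = norm(loc=0.0, scale=1.0)   # r | F=0
faulty = norm(loc=3.0, scale=1.0)    # r | F=1

def fault_posterior(r):
    """Continuous Bayes update p(F=1 | r); r is never discretized."""
    num = prior_fault * faulty.pdf(r)
    den = num + (1 - prior_fault) * healthy.pdf(r)
    return num / den

threshold = 0.5
r_obs = 2.8  # residual computed from sensor readings (assumed value)
p = fault_posterior(r_obs)
print(f"p(fault | r={r_obs}) = {p:.3f}, alarm: {p > threshold}")
```

In the paper's framework the same idea is applied jointly over many residuals and faults via a Bayesian network, rather than one fault at a time.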
Computing has entered the era of uncertain data, in which hardware and software generate and reason about estimates. Applications use estimates from sensors, machine learning, big data, humans, and approximate hardware and software. Unfortunately, developers face pervasive correctness, programmability, and optimization problems due to estimates, and most programming languages make these problems worse. We propose a new programming abstraction called Uncertain<T> embedded into languages such as C#, C++, Java, Python, and JavaScript. Applications that consume estimates use familiar discrete operations on their estimates; overloaded conditional operators specify hypothesis tests, and applications use them to control false positives and negatives; and new compositional operators express domain knowledge. By carefully restricting the expressiveness, the runtime automatically implements correct statistical reasoning at conditionals, relieving developers of the need to implement or deeply understand statistics. We demonstrate substantial programmability, correctness, and efficiency benefits of this programming model for GPS sensor navigation, approximate computing, machine learning, and Xbox.
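A minimal sketch of the Uncertain<T> idea in Python (this is an illustrative reconstruction, not the authors' implementation): a value is a sampling function, arithmetic propagates samples, and a conditional becomes a hypothesis test on the induced distribution.

```python
import random

class Uncertain:
    """Minimal sketch of Uncertain<T>: a distribution represented by a
    sampling function, with overloaded operators that propagate samples."""
    def __init__(self, sampler):
        self.sample = sampler

    def __add__(self, other):
        return Uncertain(lambda: self.sample() + other.sample())

    def __gt__(self, threshold):
        # Conditional as a hypothesis test: estimate P(X > threshold) by
        # sampling, and branch on whether it exceeds 0.5 (a fixed-sample
        # stand-in for the sequential test a real runtime would use).
        n = 10_000
        p = sum(self.sample() > threshold for _ in range(n)) / n
        return p > 0.5

random.seed(1)
# Hypothetical GPS-like reading: true speed 10.0 with Gaussian noise.
gps_speed = Uncertain(lambda: random.gauss(10.0, 2.0))
if gps_speed > 8.0:   # "probably faster than 8", not a point comparison
    print("speeding likely")
```

The key design point is that the comparison never collapses the estimate to a single number: the runtime decides the branch from the distribution, which is how false positives and negatives become controllable.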
Arguing for the need to combine declarative and probabilistic programming, Barany et al. (TODS 2017) recently introduced a probabilistic extension of Datalog as a "purely declarative probabilistic programming language." We revisit this language and propose a more principled approach towards defining its semantics based on stochastic kernels and Markov processes, standard notions from probability theory. This allows us to extend the semantics to continuous probability distributions, thereby settling an open problem posed by Barany et al. We show that our semantics is fairly robust, allowing both parallel execution and arbitrary chase orders when evaluating a program. We cast our semantics in the framework of infinite probabilistic databases (Grohe and Lindner, LMCS 2022) and show that the semantics remains meaningful even when the input of a probabilistic Datalog program is an arbitrary probabilistic database.
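To make the continuous-distribution extension concrete, here is a hedged sketch (my illustration, not the paper's formal kernel semantics) of a probabilistic Datalog-style rule whose head draws from a continuous distribution, evaluated by sampling possible worlds:

```python
import random

# Illustrative rule (hypothetical syntax):
#   delay(C) ~ Normal(30, 5)  <-  city(C).
# Each "possible world" fixes one sample for every random fact; the
# probability of a Boolean query is estimated over many sampled worlds.
random.seed(0)
cities = ["berlin", "paris"]  # the extensional database (assumed)

def sample_world():
    """One possible world: a continuous delay fact per city."""
    return {c: random.gauss(30.0, 5.0) for c in cities}

# Query: probability that some city's delay exceeds 40 minutes.
n = 50_000
hits = sum(any(d > 40.0 for d in sample_world().values()) for _ in range(n))
p_hat = hits / n
print(f"P(exists C: delay(C) > 40) ~= {p_hat:.3f}")
```

The paper's contribution is to give such programs an exact measure-theoretic meaning (via stochastic kernels) that is independent of chase order, rather than the sampling approximation shown here.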
Authors: Otis, Richard (Caltech, Jet Propulsion Laboratory, Engineering & Science Directorate, Pasadena, CA 91109, USA)
Uncertainty quantification is an important part of materials science and serves a role not only in assessing the accuracy of a given model, but also in the rational reduction of uncertainty via new models and experiments. In this review, recent advances and challenges related to uncertainty quantification for Calphad-based thermodynamic and kinetic models are discussed, with particular focus on approaches using Markov chain Monte Carlo sampling methods. General differentiable and probabilistic programming frameworks are identified as an enabling and rapidly maturing technology for scalable uncertainty quantification in complex physics-based models. Special challenges for uncertainty reduction and Bayesian design-of-experiment for improving Calphad-based models are discussed in light of recent progress in related fields.
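The MCMC workflow the review focuses on can be reduced to a minimal example: calibrate one parameter of a physics-based model against noisy measurements and sample its posterior with a random-walk Metropolis chain (the linear "model" and all data below are assumed for illustration; real Calphad models are far richer):

```python
import numpy as np

# Assumed toy setting: one parameter theta of a property model G(T) = theta*T,
# fit to synthetic noisy "measurements" over a temperature range.
rng = np.random.default_rng(0)
T = np.linspace(300.0, 1000.0, 20)
theta_true, sigma = 2.5, 10.0
data = theta_true * T + rng.normal(0.0, sigma, T.size)

def log_post(theta):
    # Gaussian likelihood with a flat prior on theta.
    return -0.5 * np.sum((data - theta * T) ** 2) / sigma**2

samples, theta = [], 1.0          # deliberately bad starting point
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.01)         # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                              # Metropolis accept
    samples.append(theta)

post = np.array(samples[1000:])                   # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```

The posterior spread, not just the point estimate, is what drives the uncertainty-reduction and design-of-experiment questions the review raises; probabilistic programming frameworks automate exactly this kind of sampling for models with many coupled parameters.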
We present a novel sampling framework for probabilistic programs. The framework combines two recent ideas, control-data separation and logical condition propagation, in a nontrivial manner so that the two ideas boost each other's benefits. We implemented our algorithm on top of Anglican. The experimental results demonstrate our algorithm's efficiency, especially for programs with while loops and rare observations. (c) 2023 Elsevier Inc. All rights reserved.
The paper first presents a brief review of the basic principles of Stochastic Limit Analysis, with particular reference to the two theorems that allow one to bound the probability of collapse of a structure when the acting forces are deterministic and/or random patterns. The two theorems give rise to two conceptually different approaches, although in principle they should produce the same results. Since both procedures can most often be applied only for approximate evaluation of the collapse probability, the major points and the specific features of the two methods, the kinematical and the static method, are restated in order to prevent possible misunderstanding due to their deterministic complementarity. Moreover, the problem of the convergence of the static approach to the true collapse probability is discussed; this problem is felt less acutely for the kinematical method, whose exploitation, at least for discrete structures with a finite number of possible collapse modes, is quite straightforward.
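For a discrete structure with a finite number of collapse modes, the kinematical route is indeed straightforward to exercise numerically. A hedged Monte Carlo sketch (the three modes, their random resistances, and the load distribution are all assumed for illustration):

```python
import numpy as np

# Toy discrete structure with three possible collapse modes: each mode m has
# a random plastic resistance R_m, and the structure collapses under a random
# load S if S exceeds the smallest mode resistance (kinematical viewpoint).
rng = np.random.default_rng(42)
n = 200_000
R = rng.normal(loc=[12.0, 14.0, 15.0], scale=[1.5, 2.0, 2.0], size=(n, 3))
S = rng.normal(loc=8.0, scale=1.5, size=n)      # random acting load

p_modes = (S[:, None] > R).mean(axis=0)         # per-mode failure probability
p_collapse = (S > R.min(axis=1)).mean()         # collapse over all modes
p_upper = min(1.0, p_modes.sum())               # elementary union bound
print(f"per-mode probabilities: {np.round(p_modes, 4)}")
print(f"collapse probability ~= {p_collapse:.4f} <= bound {p_upper:.4f}")
```

By construction the collapse probability is bracketed between the largest single-mode probability and the union bound; the paper's concern is precisely how tight such bounds are, and whether the static approach converges to the same true value.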
We introduce Tree-AMP, standing for Tree Approximate Message Passing, a Python package for compositional inference in high-dimensional tree-structured models. The package provides a unifying framework to study several approximate message passing algorithms previously derived for a variety of machine learning tasks, such as generalized linear models, inference in multi-layer networks, matrix factorization, and reconstruction using nonseparable penalties. For some models, the asymptotic performance of the algorithm can be theoretically predicted by the state evolution, and the measurement entropy can be estimated by the free entropy formalism. The implementation is modular by design: each module, which implements a factor, can be composed at will with other modules to solve complex inference tasks. The user only needs to declare the factor graph of the model: the inference algorithm, state evolution, and entropy estimation are fully automated. The source code is publicly available at https://***/sphinxteam/tramp and the documentation at https://***/***.
Adversarial computations are a widely studied class of computations where resource-bounded probabilistic adversaries have access to oracles, i.e., probabilistic procedures with private state. These computations arise routinely in several domains, including security, privacy and machine learning. In this paper, we develop program logics for reasoning about adversarial computations in a higher-order setting. Our logics are built on top of a simply typed lambda-calculus extended with a graded monad for probabilities and state. The grading is used to model and restrict the memory footprint and the cost (in terms of oracle calls) of computations. Under this view, an adversary is a higher-order expression that expects as arguments the code of its oracles. We develop unary program logics for reasoning about error probabilities and expected values, and a relational logic for reasoning about coupling-based properties. All logics feature rules for adversarial computations, and yield guarantees that are valid for all adversaries that satisfy a fixed resource policy. We prove the soundness of the logics in the category of quasi-Borel spaces, using a general notion of graded predicate liftings, and we use logical relations over graded predicate liftings to establish the soundness of proof rules for adversaries. We illustrate the working of our logics with simple but illustrative examples.