ISBN (print): 9781665460538
In this paper we present the probabilistic Operations Warranted for Energy Reliability Evaluation and Diagnostics (POWERED) hybrid artificial intelligence (HAI)/machine learning (ML) tool for diagnosing and predicting the performance and remaining useful life (RUL) of electrical transformers, to increase reliability and inform maintenance. POWERED incorporates the electrical, thermal, and environmental factors that influence transformer degradation to perform predictive analytics and forecast health and status (H&S) indicators (internal temperatures, dissolved gases), predicting impending failures and RUL more accurately. POWERED uses a rich, modular probabilistic modeling approach that holistically integrates various types of models, including sub-symbolic (i.e., purely data-driven) models and symbolic (i.e., Bayesian and physics-based) models. POWERED's predictive analytics are demonstrated on a rich data set collected over one year from an operational distribution transformer. The data set includes timestamped ambient temperatures, oil and hot-spot winding temperatures, electrical loading, and concentrations of dissolved gases. We show how these diverse factors can be correlated and used both in real-time H&S monitoring and in what-if analysis under extreme conditions.
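The physics-based side of such a hybrid can be illustrated with a toy steady-state hot-spot model that maps ambient temperature and per-unit load to a winding temperature. This is a minimal sketch under assumed parameter names and values (rated rise, cooling exponent), not the POWERED model itself:

```python
def hot_spot_temperature(ambient_c, load_pu, rated_rise_c=50.0, exponent=1.6):
    """Toy steady-state hot-spot estimate: ambient temperature plus a rated
    winding rise scaled by per-unit load raised to a cooling exponent.
    Illustrative only; parameters are assumptions, not POWERED's."""
    return ambient_c + rated_rise_c * load_pu ** exponent

# At rated load the rise equals the rated value; overloads grow super-linearly.
at_rated = hot_spot_temperature(25.0, 1.0)    # 75.0
overload = hot_spot_temperature(25.0, 1.2)
```

In a hybrid HAI/ML setting, a data-driven model would typically learn the residual between such a physics prediction and the measured hot-spot temperature.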
ISBN (print): 9781450341394
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Differential privacy of programs is then established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
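The kind of mechanism such a system would verify can be sketched with the standard Laplace mechanism: releasing a clipped mean with noise calibrated to its sensitivity. This is a generic illustration of epsilon-differential privacy, not PrivInfer code; the function names are ours:

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(data, epsilon, lo=0.0, hi=1.0):
    """epsilon-differentially-private mean via the Laplace mechanism.
    Clipping to [lo, hi] bounds each record's influence (the sensitivity)."""
    clipped = [min(max(x, lo), hi) for x in data]
    sensitivity = (hi - lo) / len(clipped)   # one record moves the mean by at most this
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon)

random.seed(0)
data = [0.2, 0.4, 0.6, 0.8] * 50    # 200 records, true mean 0.5
result = private_mean(data, epsilon=1.0)
```

A relational type system like PrivInfer's would verify, rather than test, that the noise scale matches the sensitivity bound.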
The problem of representing and learning complex visual stimuli in the context of modeling the process of conditional reflex formation is considered. A generative probabilistic framework is chosen, which has recently been successfully applied to cognitive modeling. A model capable of learning different visual stimuli is developed in the form of a program in Church (a probabilistic programming language). A NAO robot is programmed to detect visual stimuli, to point at selected stimuli in a sequence of trials, and to receive reinforcement signals for correct choices. The conducted experiments showed that the robot can learn stimuli of different types, exhibiting different decision-making behavior in a series of trials, which could help in arranging psychophysiological experiments.
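The trial-and-reinforcement loop described above can be sketched as a Beta-Bernoulli learner with Thompson sampling. This is a minimal Python stand-in for the generative-model idea, with made-up stimuli and reinforcement rates; it is not the paper's Church program:

```python
import random

class StimulusLearner:
    """Beta-Bernoulli learner of per-stimulus reinforcement rates."""
    def __init__(self, stimuli):
        self.counts = {s: [1, 1] for s in stimuli}   # Beta(1, 1) priors

    def choose(self):
        # Thompson sampling: pick the stimulus with the highest sampled rate.
        return max(self.counts, key=lambda s: random.betavariate(*self.counts[s]))

    def update(self, stimulus, reinforced):
        self.counts[stimulus][0 if reinforced else 1] += 1

random.seed(1)
learner = StimulusLearner(["circle", "square"])
for _ in range(200):
    s = learner.choose()
    # Hypothetical experiment: pointing at the circle is reinforced 90% of the time.
    learner.update(s, reinforced=random.random() < (0.9 if s == "circle" else 0.1))
```

After the trials, the learner's posterior counts concentrate on the more frequently reinforced stimulus, mirroring conditioning over a series of trials.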
Probabilistic programming is the idea of writing models from statistics and machine learning using program notations and reasoning about these models using generic inference engines. Recently its combination with deep learning has been explored intensely, which has led to the development of so-called deep probabilistic programming languages, such as Pyro, Edward, and ProbTorch. At the core of this development lie inference engines based on stochastic variational inference algorithms. When asked to find information about the posterior distribution of a model written in such a language, these algorithms convert this posterior-inference query into an optimisation problem and solve it approximately by a form of gradient ascent or descent. In this paper, we analyse one of the most fundamental and versatile variational inference algorithms, called the score estimator or REINFORCE, using tools from denotational semantics and program analysis. We formally express what this algorithm does on models denoted by programs, and expose implicit assumptions made by the algorithm on the models. The violation of these assumptions may lead to an undefined optimisation objective or the loss of convergence guarantees for the optimisation process. We then describe rules for proving these assumptions, which can be automated by static program analyses. Some of our rules use nontrivial facts from continuous mathematics, and let us replace requirements about integrals in the assumptions, such as the integrability of functions defined in terms of programs' denotations, by conditions involving differentiation or boundedness, which are much easier to prove automatically (and manually). Following our general methodology, we have developed a static program analysis for the Pyro programming language that aims at discharging the assumption about what we call model-guide support match. Our analysis is applied to the eight representative model-guide pairs from the Pyro webpage, which include sophisticated neural ne...
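The score estimator at the center of this analysis can be sketched in a few lines for a Bernoulli model: the gradient of an expectation is estimated by averaging f(x) times the score, d/dtheta log p(x; theta). The model and objective below are our own toy example, not taken from the paper:

```python
import random

def reinforce_grad(theta, f, n=20000):
    """Score-function (REINFORCE) estimate of d/dtheta E_{x~Bern(theta)}[f(x)]:
    the sample average of f(x) * d/dtheta log p(x; theta)."""
    total = 0.0
    for _ in range(n):
        x = 1 if random.random() < theta else 0
        score = (1.0 / theta) if x == 1 else (-1.0 / (1.0 - theta))
        total += f(x) * score
    return total / n

random.seed(0)
# E[f] = theta * f(1) + (1 - theta) * f(0), so the exact gradient is f(1) - f(0) = 3.
grad_estimate = reinforce_grad(0.3, lambda x: 4.0 if x == 1 else 1.0)
```

Note how the score divides by theta and 1 - theta: at the boundary of the parameter space the estimator is undefined, a concrete instance of the implicit assumptions whose violation the paper's static analysis is designed to catch.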
In this paper we consider an important problem in the teaching of advanced theory: the need for access to complicated software. In many cases, in order to allow students to participate in more challenging classes, like ...
Abstract: This thesis deals with chance constrained stochastic programming problems. We consider several chance constrained models and focus on their convexity properties. The thesis presents the theory of α-concave functions and measures as a basic tool for proving the convexity of the problems. We use the results of this theory to prove the convexity of the models, first for continuous distributions and then for discrete distributions of the random vectors. We characterize a large class of continuous distributions that satisfy the sufficient conditions for the convexity of the given models, and we present solving algorithms for these models. We also present sufficient conditions for the convexity of problems with discrete distributions. Finally, we deal with algorithms for solving non-convex problems and briefly discuss the difficulties that can occur when using these methods.
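The continuous-distribution case can be illustrated with a single chance constraint P(demand ≤ x) ≥ p under a normal demand, checked here by Monte Carlo. The distribution and its parameters are our own toy example; the convexity remark reflects the α-concavity theory the thesis builds on (the normal density is log-concave):

```python
import random

def satisfaction_prob(x, n=20000, mu=10.0, sigma=2.0):
    """Monte Carlo estimate of P(demand <= x) for demand ~ N(mu, sigma)."""
    hits = sum(1 for _ in range(n) if random.gauss(mu, sigma) <= x)
    return hits / n

random.seed(0)
# The normal density is log-concave (0-concave), so the feasible set
# { x : P(demand <= x) >= p } is convex -- in one dimension, a half-line.
p_low = satisfaction_prob(12.0)    # exact value: Phi(1) ~ 0.841
p_high = satisfaction_prob(14.0)   # exact value: Phi(2) ~ 0.977
```

For discrete distributions no such concavity is available, which is why the thesis treats that case with separate sufficient conditions and algorithms.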
The Bayesian approach to machine learning amounts to computing posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define measure-transformer combinators inspired by theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that is processed by an existing inference engine for factor graphs, which are data structures that enable many efficient inference algorithms. This allows efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
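The sample/observe discipline of such a calculus can be sketched with likelihood weighting for a coin-bias model: sampling draws from the prior, observing multiplies the trace weight by the likelihood of the data. This is a generic importance-sampling illustration with a made-up model, not the paper's factor-graph compilation:

```python
import random

def posterior_mean_bias(heads=7, trials=10, n=50000):
    """Likelihood weighting: 'sample' draws the bias from its Uniform(0,1)
    prior; 'observe' weights the trace by the likelihood of `heads` heads."""
    total_w, total_wb = 0.0, 0.0
    for _ in range(n):
        bias = random.random()                                # sample
        w = bias ** heads * (1.0 - bias) ** (trials - heads)  # observe
        total_w += w
        total_wb += w * bias
    return total_wb / total_w

random.seed(0)
post_mean = posterior_mean_bias()   # exact posterior is Beta(8, 4), mean 2/3
```

This naive weighting breaks down for observations of zero-probability (continuous) events, which is precisely the case the paper's measure-transformer semantics is designed to handle rigorously.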
Procurement maturity has become a crucial indicator reflecting how effectively and efficiently a procurement function fulfills expectations. The purchasing and supply management literature posits several maturity evaluation models providing tools for a comprehensive assessment of excellence. The quality management literature also handles the excellence issue from the process improvement perspective. This study investigates the role of process improvement practices in improving the maturity level of procurement organizations. A maturity assessment survey collects data from 96 purchasing and supply management professionals. We propose a Bayesian hierarchical mean difference model that deploys a Markov chain Monte Carlo (MCMC) sampler to infer posterior parameters. Results indicate that firms regularly practicing process improvement activities have statistically higher performance than firms that rarely or never practice them, on aggregate procurement maturity and its sub-dimensions. These results emphasize that process improvement escalates procurement maturity from a reactive to a proactive level. We also discuss the advantages of Bayesian hypothesis testing with the probabilistic programming approach, a novel branch of data science, compared to traditional frequentist hypothesis testing.
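A Bayesian mean-difference test of this kind can be sketched with a random-walk Metropolis sampler over the two group means. The data, the flat priors, and the fixed unit noise scale are simplifying assumptions of ours; the paper's hierarchical model is richer:

```python
import math
import random

def log_lik(mu, data):
    """Gaussian log-likelihood with known unit variance (constants dropped)."""
    return -0.5 * sum((x - mu) ** 2 for x in data)

def metropolis_mean_diff(group_a, group_b, n_iter=20000, step=0.3):
    """Random-walk Metropolis over (mu_a, mu_b) with flat priors;
    returns posterior draws of mu_a - mu_b."""
    mu_a, mu_b, draws = 0.0, 0.0, []
    for _ in range(n_iter):
        prop_a = mu_a + random.gauss(0.0, step)
        prop_b = mu_b + random.gauss(0.0, step)
        log_ratio = (log_lik(prop_a, group_a) + log_lik(prop_b, group_b)
                     - log_lik(mu_a, group_a) - log_lik(mu_b, group_b))
        if random.random() < math.exp(min(0.0, log_ratio)):
            mu_a, mu_b = prop_a, prop_b
        draws.append(mu_a - mu_b)
    return draws

random.seed(0)
regular = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]   # made-up maturity scores
rare = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3]
draws = metropolis_mean_diff(regular, rare)[5000:]   # drop burn-in
mean_diff = sum(draws) / len(draws)
frac_positive = sum(d > 0 for d in draws) / len(draws)
```

The posterior probability that the difference is positive (here `frac_positive`) plays the role that a p-value plays in the frequentist test, but with a direct probabilistic interpretation.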
This paper presents the first slicing approach for probabilistic programs based on specifications. We show that when probabilistic programs are accompanied by their specifications in the form of pre- and post-conditions, we can exploit this semantic information to produce specification-preserving slices strictly more precise than the slices yielded by conventional techniques based on data/control dependency. To achieve this goal, our technique is based on the backward propagation of postconditions via the greatest pre-expectation transformer, the probabilistic counterpart of Dijkstra's weakest pre-condition transformer. The technique is termination-sensitive, allowing one to preserve the partial as well as the total correctness of probabilistic programs w.r.t. their specifications. It is modular, featuring a local reasoning principle, and is formally proved correct. As fundamental technical ingredients of our technique, we design verification condition generators for establishing the partial and total correctness of probabilistic programs and prove them sound; these are of interest in their own right and can be exploited elsewhere for other purposes. On the practical side, we demonstrate the applicability of our approach by means of a few illustrative examples and a case study from the probabilistic modelling field. We also describe an algorithm for computing least slices among the space of slices derived by our technique.
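The backward propagation of postconditions can be sketched by encoding pre-expectation transformers as functions on expectations (maps from states to reals). The tiny single-variable program below is our own example, not one from the paper:

```python
def assign(update):
    """wp of an assignment: substitute the updated state into the postcondition."""
    return lambda post: lambda x: post(update(x))

def prob_choice(wp1, wp2, p):
    """wp of { S1 } [p] { S2 }: the p-weighted average of both branches."""
    return lambda post: lambda x: p * wp1(post)(x) + (1 - p) * wp2(post)(x)

def seq(wp1, wp2):
    """wp of S1; S2: postconditions propagate backwards, wp1 after wp2."""
    return lambda post: wp1(wp2(post))

# Program P: { x := x + 1 } [1/2] { x := 0 }
wp_P = prob_choice(assign(lambda x: x + 1), assign(lambda x: 0), 0.5)

post = lambda x: float(x >= 2)   # expectation: probability that x >= 2 on exit
pre = wp_P(post)
# pre(1) = 0.5: only the incrementing branch can reach x >= 2; pre(0) = 0.0.

# Backward propagation through sequencing: x := x + 1; P
pre_seq = seq(assign(lambda x: x + 1), wp_P)(post)   # pre_seq(0) = pre(1)
```

A slicer in the paper's style would keep exactly the statements on which such a computed pre-expectation depends, discarding the rest while preserving the specification.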