In many knowledge representation formalisms, a constructive semantics is defined based on sequential applications of rules or of a semantic operator. These constructions often share the property that rule applications must be delayed until it is safe to do so: until it is known that the condition that triggers the rule will continue to hold. This intuition occurs for instance in the well-founded semantics of logic programs and in autoepistemic logic. In this paper, we formally define the safety criterion algebraically. We study properties of so-called safe inductions and apply our theory to logic programming and autoepistemic logic. For the latter, we show that safe inductions manage to capture the intended meaning of a class of theories on which all classical constructive semantics fail. (C) 2018 Elsevier B.V. All rights reserved.
AdSiF (Agent driven Simulation Framework) provides a programming environment for modeling, simulating, and programming agents, which fuses agent-based, object-oriented, aspect-oriented, and logic programming into a single paradigm. The power of this paradigm stems from its ontological background and the paradigms it embraces and integrates into a single paradigm called state-oriented programming. AdSiF commits to describing what exists and to modeling agents' reasoning abilities, which in turn drive model behaviors. Basically, AdSiF provides a knowledge base and a depth-first search mechanism for reasoning. It is possible to model different search mechanisms, but depth-first search is the default mechanism for first-order reasoning. The knowledge base consists of facts and predicates. The reasoning mechanism is combined with a dual-world representation, defined as an inner representation of a simulated environment and constructed from time-stamped sensory data (or beliefs) obtained from that environment, even when these data contain errors. This mechanism allows models to make decisions using their historical data and their own states. The study provides a novel view of simulation and agent modeling using a script-based graph programming structure that realizes state-oriented programming with a multi-paradigm approach. The study also enhances simulation modeling and agent programming using logic programming and aspect orientation. It provides a solution framework for continuous and discrete-event simulation and allows modelers to use their own simulation time management, event handling, distributed, and real-time simulation algorithms. (C) 2018 Elsevier B.V. All rights reserved.
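The default depth-first reasoning over a knowledge base of facts and rules can be sketched as a few lines of backward chaining (a minimal Python illustration; the rule format and names are invented, not AdSiF's actual API):

```python
def prove(goal, facts, rules, depth=0, max_depth=20):
    """Depth-first backward chaining: succeed if `goal` is a fact or the
    head of a rule whose whole body can be proved recursively."""
    if depth > max_depth:                      # crude guard against cyclic rule sets
        return False
    if goal in facts:
        return True
    return any(head == goal
               and all(prove(b, facts, rules, depth + 1) for b in body)
               for head, body in rules)

# Hypothetical agent knowledge base: time-stamped sensor data becomes a
# belief, which in turn licenses an action.
facts = {"sensor_reading", "timestamp_valid"}
rules = [("belief", ["sensor_reading", "timestamp_valid"]),
         ("act", ["belief"])]

print(prove("act", facts, rules))      # True
print(prove("retreat", facts, rules))  # False
```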
PRISM is a probabilistic programming language based on Prolog, augmented with primitives to represent probabilistic choice. It is implemented using a combination of low-level support from a modified version of B-Prolog, source-level program transformation, and libraries for inference and learning implemented in C. More recently, developers working with functional programming languages have taken the approach of embedding probabilistic primitives into an existing language, with little or no modification to the host language, often by using delimited continuations. Captured continuations represent pieces of the probabilistic program which can be manipulated to achieve a great variety of computational effects useful for inference. In this paper, I will describe an approach based on delimited control operators recently introduced into SWI Prolog. These are used to create a system of nested effect handlers which together implement the core functionality of PRISM, the building of explanation graphs, entirely in Prolog and in an order of magnitude less code. Other declarative programming tools, such as constraint logic programming, are used to implement tools for inference, such as the inside-outside and EM algorithms, lazy best-first explanation search, and MCMC samplers. By embedding the functionality of PRISM into SWI Prolog, users gain access to its rich libraries and development environment. By expressing the functionality of PRISM in a small amount of pure, high-level Prolog, this implementation facilitates further experimentation with the mechanisms of probabilistic logic programming, including new probabilistic modelling features and inference algorithms, such as variational inference in models with real-valued variables. (C) 2018 Elsevier Inc. All rights reserved.
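The handler-based construction of explanation graphs can be approximated without delimited continuations by replaying the program with forced prefixes of choices (a Python sketch of the idea only; the `choose`/`run` API is an invented illustration, not PRISM's or SWI-Prolog's interface):

```python
def run(program):
    """Enumerate every explanation of `program` (a function taking a
    `choose` callback): each explanation is a complete list of
    (switch, value) choices plus the program's result. Branching is
    explored by re-running the program with forced choice prefixes,
    replaying in plain Python what a delimited-control handler would
    capture as a continuation."""
    explanations, stack = [], [[]]
    while stack:
        prefix = stack.pop()
        trace = []

        def choose(name, options):
            i = len(trace)
            if i < len(prefix):                 # replay a forced choice
                trace.append(prefix[i])
                return prefix[i][1]
            for opt in options[1:]:             # schedule the other branches
                stack.append(trace[:] + [(name, opt)])
            trace.append((name, options[0]))    # explore the first branch now
            return options[0]

        explanations.append((program(choose), trace))
    return explanations

def two_coins(choose):
    a = choose("coin1", ["heads", "tails"])
    b = choose("coin2", ["heads", "tails"])
    return a == b

exps = run(two_coins)   # 4 explanations, two of which yield True
```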
ISBN (print): 9780769546735
Knowledge representation and inference in AI have traditionally been divided between logic-based and statistical approaches. During the past decade, the rapidly developing area of Statistical Relational Learning has aimed to combine the two frameworks for representation and inference. In many cases, these works include probabilistic reasoning within logic programming frameworks. These attempts are restricted in the sense that they use only two-valued negation-as-failure semantics. However, the well-founded semantics is a widely accepted three-valued negation semantics, implemented in several logic programming frameworks. In this paper we introduce probabilistic inference under the well-founded semantics in a single probabilistic logic programming framework, where uncertainty can be described using both statistical information (probabilities) and a third logic value.
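For a propositional program, the three-valued well-founded model mentioned above can be computed by the standard alternating-fixpoint construction over the Gelfond-Lifschitz reduct (a minimal Python sketch of that classical construction, independent of the paper's probabilistic framework):

```python
def least_model(pos_rules):
    """Least Herbrand model of a negation-free propositional program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in pos_rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

def gamma(rules, guess):
    """Least model of the Gelfond-Lifschitz reduct of `rules` w.r.t. `guess`:
    drop rules whose negative body intersects the guess, keep positive bodies."""
    reduct = [(h, set(pos)) for h, pos, neg in rules if not (set(neg) & guess)]
    return least_model(reduct)

def well_founded(rules):
    """Alternating fixpoint: rules are (head, positive_body, negative_body)."""
    atoms = {h for h, _, _ in rules}
    atoms |= {a for _, p, n in rules for a in tuple(p) + tuple(n)}
    true = set()
    while True:
        upper = gamma(rules, true)    # overestimate: atoms not certainly false
        nxt = gamma(rules, upper)     # underestimate: atoms certainly true
        if nxt == true:
            break
        true = nxt
    false = atoms - upper
    undefined = atoms - true - false
    return true, false, undefined

# p :- not q.   q :- not p.   r.   s :- not r.
rules = [("p", (), ("q",)), ("q", (), ("p",)), ("r", (), ()), ("s", (), ("r",))]
t, f, u = well_founded(rules)
# r is true, s is false, and the mutual negation leaves p, q undefined
```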
A number of approaches for helping programmers detect incorrect program behaviors are based on combining language-level constructs (such as procedure-level assertions/contracts, program-point assertions, or gradual types) with a number of associated tools (such as code analyzers and run-time verification frameworks) that automatically check the validity of such constructs. However, these constructs and tools are often not used to their full extent in practice due to excessive run-time overhead, limited expressiveness, and/or limitations in the effectiveness of the tools. Verification frameworks that combine static and dynamic verification techniques and are based on abstraction offer the potential to bridge this gap. In this paper we explore the effectiveness of abstract interpretation in detecting parts of program specifications that can be statically simplified to true or false, as well as in reducing the cost of the run-time checks required for the remaining parts of these specifications. Starting with a semantics for programs with assertion checking, and for assertion simplification based on static analysis information obtained via abstract interpretation, we propose and study a number of practical assertion checking "modes," each of which represents a trade-off between code annotation depth, execution time slowdown, and program safety. We then explore these modes in two typical, library-oriented scenarios. We also propose program transformation-based methods for taking advantage of the run-time checking semantics to improve the precision of the analysis. Finally, we study experimentally the performance of these techniques. Our experiments illustrate the benefits and costs of each of the assertion checking modes proposed, as well as the benefits obtained from analysis and the proposed transformations in these scenarios. (C) 2017 Elsevier B.V. All rights reserved.
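The idea of checking "modes" can be illustrated with a toy contract decorator in which checks discharged by a (stubbed) static analysis are skipped at run time (a hypothetical Python sketch, not the paper's actual assertion language or tooling):

```python
# MODE selects the checking regime: "unsafe" (no run-time checks),
# "safe" (all run-time checks), or "opt" (skip checks that static
# analysis already simplified to true).
MODE = "opt"

def pre(check, statically_verified=False):
    """Attach a precondition to a function. `statically_verified` stands in
    for the verdict of an abstract-interpretation-based analyzer."""
    def wrap(fn):
        def inner(*args):
            if MODE == "safe" or (MODE == "opt" and not statically_verified):
                assert check(*args), f"precondition of {fn.__name__} failed"
            return fn(*args)
        return inner
    return wrap

@pre(lambda xs: all(isinstance(x, int) for x in xs), statically_verified=True)
def total(xs):
    # under "opt", the discharged check above costs nothing at run time
    return sum(xs)

print(total([1, 2, 3]))  # 6
```

The trade-off the abstract describes is visible here: "safe" pays the full check on every call, while "opt" keeps only the checks the analysis could not simplify away.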
ISBN (print): 9781467327091
In this paper we focus on a logic-programming-based approach to plan recognition in intrusion detection systems. The goal of an intruder is to attack a computer or network system for malicious reasons, and the goal of the intrusion detection system is to detect the intruder's actions and warn the network administrator of an impending attack. We show how an intrusion detection system can recognize the plans of the intruder by modeling the domain as a logic program and then reducing the plan recognition problem to computing models of that logic program. This methodology has been used widely for planning problems and fits very naturally for plan recognition. We give an example scenario and show how to model it. Our results are quite satisfactory, and we believe that our approach can lead to a generalized solution to plan recognition.
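The reduction of plan recognition to computing models can be illustrated on a toy propositional encoding (the atoms, observations, and covering condition below are invented examples, not the paper's encoding):

```python
from itertools import combinations

# Each candidate plan atom "explains" a set of observable intruder actions.
# A model is a set of plan atoms that is consistent (exactly one plan) and
# whose plan covers every observed action.
explains = {"plan_break_in": {"port_scan", "password_guess"},
            "plan_dos":      {"port_scan", "syn_flood"}}
observed = {"port_scan", "password_guess"}

def is_model(m):
    return len(m) == 1 and observed <= explains[next(iter(m))]

plans = sorted(explains)
models = [set(m) for r in range(len(plans) + 1)
          for m in combinations(plans, r) if is_model(set(m))]
# models == [{"plan_break_in"}]: the break-in plan is the only hypothesis
# that explains all observed intruder actions
```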
A proppant bed develops and its height grows until the fluid above it reaches the critical velocity and the bed attains its equilibrium height. This paper proposes a comprehensive mathematical model to evaluate the equilibrium height for slickwater treatments. We use well-accepted published experimental data and models from other groups to validate our model. After that, we investigate the effects of proppant properties and fluid properties on the equilibrium height. This work can provide critical insights to optimize the design of proppant parameters in a hydraulic fracture. Meanwhile, this model can be incorporated into fracture-propagation simulators for simulating proppant transport.
In designing safety-critical infrastructures such as railway systems, engineers often have to deal with complex and large-scale designs. Formal methods can play an important role in helping automate various tasks. For railway designs, formal methods have mainly been used to verify the safety of so-called interlockings through model checking, which deals with state change and rather complex properties, usually incurring considerable computational burden (e.g., the state-space explosion problem). In contrast, we focus on static infrastructure models, and are interested in checking requirements coming from design guidelines and regulations, as usually given by railway authorities or safety certification bodies. Our goal is to automate the tedious manual work that railway engineers do when ensuring compliance with regulations, using software that is fast enough to do verification on-the-fly and can thus be included in railway design tools, much like a compiler in an IDE. In consequence, this paper describes the integration into the railway design process of formal methods for automatically extracting railway models from CAD railway designs and for describing relevant technical regulations and expert knowledge as properties to be checked on the models. We employ a variant of Datalog and use the standardized "railway markup language" railML as basis and exchange format for the formalization. We developed a prototype tool and integrated it in industrial railway CAD software, developed under the name RailCOMPLETE®. This on-the-fly verification tool helps the engineer during design, and is not a replacement for heavier-weight software such as tools for interlocking verification or capacity analysis. Our tool, through its export to railML, can easily be integrated with these other tools. We apply our tool chain in a Norwegian railway project, the upgrade of the Arna railway station.
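Naive bottom-up evaluation of a Datalog rule against static infrastructure facts can be sketched in a few lines (the facts and the regulation below are invented illustrations, not actual railML content or the paper's rules):

```python
# Infrastructure facts: ("next", a, b) = track segment from a to b;
# ("signal", s) = s is a signal.
facts = {("next", "entry", "s1"), ("next", "s1", "s2"), ("next", "s2", "s3"),
         ("signal", "s2"), ("signal", "s4")}

def derive_reach(db):
    """Naive bottom-up evaluation of:
       reach(X,Y) :- next(X,Y).
       reach(X,Z) :- reach(X,Y), next(Y,Z)."""
    nxt = {(f[1], f[2]) for f in db if f[0] == "next"}
    reach = set(nxt)
    while True:
        new = {(x, z) for (x, y) in reach for (y2, z) in nxt if y2 == y}
        if new <= reach:          # fixpoint reached
            return reach
        reach |= new

reach = derive_reach(facts)

# Toy regulation: every signal must be reachable from the station entry.
violations = sorted(f[1] for f in facts
                    if f[0] == "signal" and ("entry", f[1]) not in reach)
# violations == ["s4"]: signal s4 is not connected to the entry
```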
We address automated testing and interactive proving of properties involving complex data structures with constraints, like the ones studied in enumerative combinatorics, e.g., permutations and maps. In this paper we show testing techniques to check properties of custom data generators for these structures. We focus on random property-based testing and bounded exhaustive testing, to find counterexamples for false conjectures in the Coq proof assistant. For random testing we rely on the existing Coq plugin QuickChick and its toolbox to write random generators. For bounded exhaustive testing, we use logic programming to generate all the data up to a given size. We also propose an extension of QuickChick with bounded exhaustive testing, based both on generators developed inside Coq and on correct-by-construction generators developed with Why3. These tools are applied to an original Coq formalization of the combinatorial structures of permutations and rooted maps, together with some operations on them and properties about them. Recursive generators are defined for each combinatorial family. They are used for debugging properties which are finally proved in Coq. This large case study is also a contribution in enumerative combinatorics.
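Bounded exhaustive testing up to a size bound can be mimicked in a few lines (a Python sketch of the idea only, not the Coq/Why3 generators themselves; the conjecture is an invented example):

```python
from itertools import permutations

def bounded_exhaustive(max_size):
    """Yield every permutation of 0..n-1 for each n <= max_size,
    in the spirit of logic-programming-style exhaustive generation."""
    for n in range(max_size + 1):
        yield from permutations(range(n))

def inverse(p):
    """Inverse permutation: inverse(p)[p[i]] == i."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Deliberately false conjecture: every permutation is its own inverse.
# Exhaustive search finds the smallest counterexample, a 3-cycle.
counterexample = next(p for p in bounded_exhaustive(4) if inverse(p) != p)
# counterexample == (1, 2, 0)
```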
A logical approach to composing qualitative shape descriptors (LogC-QSD) is presented in this paper. Each object shape is described qualitatively by its edges, angles, convexities, and lengths. LogC-QSD describes the shape of composed objects qualitatively, adding circuits to describe the connections among the shapes. It also infers new angles and lengths using composition tables. Its main contributions are: (i) describing qualitatively the resulting boundary of connecting N shapes and (ii) its application to solve spatial reasoning tests. The LogC-QSD approach has been implemented in the Prolog programming language, which is based on Horn clauses and first-order logic. The testing framework was SWI-Prolog on the LogC-QSD dataset. The obtained results show that LogC-QSD was able to correctly answer all the questions in the dataset, which involved compositions of up to five shapes. The correct answer for 60% of the questions was obtained in an average time of 2.45×10⁻⁴ s by comparing the concavities and right angles of the final composed QSD shape with the possible answers. The remaining questions required a matching algorithm and were solved by LogC-QSD in an average time of 19.50×10⁻⁴ s. Analysis of the execution times shows that the algorithmic cost of LogC-QSD is lower than O(n²) in the worst case. (C) 2018 Elsevier B.V. All rights reserved.
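Composition-table inference of new qualitative relations can be illustrated with the standard point-algebra table for relative length (a generic example of the technique; this is not LogC-QSD's own composition table for angles and lengths):

```python
S, E, L = "shorter", "equal", "longer"
ALL = {S, E, L}

# Composing order relations: e.g., shorter-than of shorter-than is shorter,
# while shorter followed by longer is ambiguous (any relation is possible).
LENGTH_COMPOSITION = {
    (S, S): {S}, (S, E): {S}, (S, L): ALL,
    (E, S): {S}, (E, E): {E}, (E, L): {L},
    (L, S): ALL, (L, E): {L}, (L, L): {L},
}

def compose(r1, r2):
    """Possible relations between x and z, given relation r1 between x and y
    and relation r2 between y and z."""
    return LENGTH_COMPOSITION[(r1, r2)]

# If edge a is shorter than b, and b equals c, then a is shorter than c:
print(compose(S, E))  # {'shorter'}
```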