We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of an LTA are expressed in a dialect of the Event Calculus. To handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experiments on a benchmark dataset of human surveillance videos.
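To make the setting concrete, here is a minimal sketch (ours, not the paper's actual Event Calculus specification) of crisp LTA recognition: a hypothetical LTA meeting(e1, e2) is recognised at every time point where both entities exhibit an "active" or "inactive" STA while being spatially close. The STA labels, distance threshold, and data layout are all illustrative assumptions.

```python
# Illustrative crisp recognition of a long-term activity from
# time-stamped short-term activities. The "meeting" definition and
# all thresholds are hypothetical, not the paper's specification.

STAS = [  # (time, entity, short-term activity, position)
    (1, "p1", "active",   (0, 0)), (1, "p2", "inactive", (1, 0)),
    (2, "p1", "inactive", (0, 0)), (2, "p2", "inactive", (1, 0)),
    (3, "p1", "walking",  (9, 0)), (3, "p2", "inactive", (1, 0)),
]

def close(a, b, threshold=2.0):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= threshold

def meeting(stas, e1, e2):
    """Time points at which the LTA meeting(e1, e2) holds."""
    by_time = {}
    for t, entity, act, pos in stas:
        by_time.setdefault(t, {})[entity] = (act, pos)
    times = []
    for t, snap in sorted(by_time.items()):
        if e1 in snap and e2 in snap:
            (act1, pos1), (act2, pos2) = snap[e1], snap[e2]
            if {act1, act2} <= {"active", "inactive"} and close(pos1, pos2):
                times.append(t)
    return times

print(meeting(STAS, "p1", "p2"))  # -> [1, 2]
```

A probabilistic variant would attach detection confidences to each STA and propagate them to the LTA, which is what motivates the move to a probabilistic logic programming framework.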
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with that of standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
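The rule-weight intuition can be sketched in a few lines: a rule with weight w supports its head with certainty min(w, certainty of its body). The toy fixpoint below (our own illustration; it ignores negation and the constraint-based semantics introduced in the paper) shows how certainties propagate through a small positive program.

```python
# Naive propagation of possibilistic rule weights: a rule
# (head :- body) with weight w supports head with certainty
# min(w, min of the body atoms' certainties). Illustration only;
# negation and the paper's possibility-distribution semantics
# are deliberately left out.

rules = [  # (head, body atoms, weight)
    ("rain",     [],       1.0),  # an entirely certain fact
    ("wet",      ["rain"], 0.8),
    ("slippery", ["wet"],  0.6),
]

def certainties(rules):
    cert = {}
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            body_cert = min((cert.get(a, 0.0) for a in body), default=1.0)
            new = max(cert.get(head, 0.0), min(w, body_cert))
            if new > cert.get(head, 0.0):
                cert[head] = new
                changed = True
    return cert

print(certainties(rules))  # {'rain': 1.0, 'wet': 0.8, 'slippery': 0.6}
```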
We introduce novel mathematical models and algorithms to generate (shortest or k different) explanations for biomedical queries, using answer set programming. We implement these algorithms and integrate them in BIOQUERY-ASP. We illustrate the usefulness of these methods with some complex biomedical queries related to drug discovery, over the biomedical knowledge resources PHARMGKB, DRUGBANK, BIOGRID, CTD, SIDER, DISEASE ONTOLOGY, and ORPHADATA.
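A toy analogue of the shortest-explanation task (our own simplification, not the paper's ASP models): treat the knowledge resources as a graph of relations, and take a shortest chain of relations connecting a drug to a disease as the explanation.

```python
# Breadth-first search for the shortest chain of relations linking
# two entities in a miniature, entirely made-up knowledge graph.
# The actual system expresses such queries in answer set programming
# over resources such as DRUGBANK and CTD.
from collections import deque

EDGES = {
    "drugA": [("targets", "gene1")],
    "gene1": [("interacts_with", "gene2")],
    "gene2": [("associated_with", "diseaseX")],
}

def shortest_explanation(start, goal):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

print(shortest_explanation("drugA", "diseaseX"))
# [('drugA', 'targets', 'gene1'), ('gene1', 'interacts_with', 'gene2'),
#  ('gene2', 'associated_with', 'diseaseX')]
```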
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks, such as computing marginals given evidence and learning from (partial) interpretations, have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks, based on converting the program, queries, and evidence into a weighted Boolean formula. This allows us to reduce inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs expectation-maximization and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming, and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
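The core reduction can be illustrated on a two-fact toy program (our own example, evaluated here by brute-force enumeration rather than the knowledge compilation used in practice): marginal and conditional probabilities become ratios of weighted model counts.

```python
# Toy reduction of probabilistic inference to weighted model counting.
# Program (made up): 0.6::burglary.  0.3::earthquake.
#                    alarm :- burglary.  alarm :- earthquake.
from itertools import product

weight = {("b", True): 0.6, ("b", False): 0.4,
          ("e", True): 0.3, ("e", False): 0.7}

def wmc(condition):
    """Sum of weights over truth assignments satisfying `condition`."""
    total = 0.0
    for b, e in product([True, False], repeat=2):
        if condition(b, e):
            total += weight[("b", b)] * weight[("e", e)]
    return total

# P(alarm) = WMC(alarm); alarm holds iff burglary or earthquake does.
p_alarm = wmc(lambda b, e: b or e)
print(p_alarm)  # 0.72 = 1 - 0.4 * 0.7

# Conditional query: P(burglary | alarm) = WMC(b & alarm) / WMC(alarm).
print(wmc(lambda b, e: b and (b or e)) / p_alarm)  # 0.6 / 0.72 ~ 0.833
```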
Aggregation functions are widely used in answer set programming for representing and reasoning about knowledge involving sets of objects collectively. Current implementations simplify the structure of programs in order to optimize overall performance. In particular, aggregates are rewritten into simpler forms known as monotone aggregates. Since the evaluation of normal programs with monotone aggregates is in general at a lower complexity level than the evaluation of normal programs with arbitrary aggregates, any faithful translation function must introduce disjunction in rule heads in some cases. However, no function of this kind was previously known. The paper closes this gap by introducing a polynomial, faithful, and modular translation for rewriting common aggregation functions into the simpler form accepted by current solvers. A prototype system allows for experimenting with arbitrary recursive aggregates, which are also supported in the recent version 4.5 of the grounder GRINGO, using the methods presented in this paper.
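A quick way to see why arbitrary aggregates resist the monotone rewriting is that #sum with negative weights is nonmonotone: enlarging the aggregated set can falsify a bound that previously held. The snippet below (an illustration with made-up weights) demonstrates this.

```python
# A monotone aggregate stays true when the set grows; #sum with
# negative weights does not, which is what makes arbitrary
# aggregates harder to evaluate than monotone ones.

def sum_at_least(weights, bound):
    return sum(weights) >= bound

small = [3]        # #sum{ 3:a } >= 2 holds
large = [3, -4]    # adding -4:b falsifies the same bound

print(sum_at_least(small, 2))  # True
print(sum_at_least(large, 2))  # False: nonmonotone behaviour
```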
Recently, the combination of probability, logic and learning has received considerable attention in the artificial intelligence and machine learning communities; see e.g. Getoor and Taskar (2007); De Raedt et al. (2008). Computational logic often plays a major role in these developments since it forms the theoretical backbone for much of the work in probabilistic programming and logical and relational learning. Contemporary work in this area is often application- and experiment-driven, but is also concerned with the theoretical foundations of formalisms and inference procedures and with advanced implementation technology that scales well.
Input-output examples have emerged as a practical and user-friendly specification mechanism for program synthesis in many environments. While example-driven tools have demonstrated tangible impact that has inspired adoption in industry, their underlying semantics are less well understood: what are "examples" and how do they relate to other kinds of specifications? This paper demonstrates that examples can, in general, be interpreted as refinement types. Seen in this light, program synthesis is the task of finding an inhabitant of such a type. This insight provides an immediate semantic interpretation for examples. Moreover, it enables us to exploit decades of research in type theory, as well as its correspondence with intuitionistic logic, rather than designing ad hoc theoretical frameworks for synthesis from scratch. We put this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that we prove to be sound with respect to a conventional type system. In addition, we show how to handle negative examples, which arise from user feedback or counterexample-guided loops. This theory serves as the basis for a prototype implementation that extends our core language to support ML-style algebraic data types and structurally inductive functions. Users can also specify synthesis goals using polymorphic refinements and import monomorphic libraries. The prototype serves as a vehicle for empirically evaluating a number of different strategies for resolving the nondeterminism of the sequent calculus (bottom-up theorem proving, term enumeration with refinement type checking, and combinations of both), the results of which classify, explain, and validate the design choices of existing synthesis systems. It also provides a platform for measuring the practical value of a specification language that combines "examples" with the more general expressiveness of refinements.
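The "synthesis as inhabitation" reading can be illustrated with a deliberately tiny enumerator (our own toy setup, not the paper's sequent calculus): each input-output pair acts like a singleton refinement that candidate terms must satisfy.

```python
# Toy synthesis from input-output examples by term enumeration over
# a fixed grammar of small integer functions. Each example (i, o)
# plays the role of a singleton refinement on the result type.

CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
    ("x - 1", lambda x: x - 1),
]

def synthesize(examples):
    """Return the first candidate consistent with all I/O examples."""
    for name, f in CANDIDATES:
        if all(f(i) == o for i, o in examples):
            return name
    return None  # the "type" given by the examples is uninhabited here

print(synthesize([(1, 2), (3, 6)]))  # 'x * 2'
print(synthesize([(2, 4), (3, 9)]))  # 'x * x'
```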
We present a method for verifying the correctness of imperative programs which is based on the automated transformation of their specifications. Given a program prog, we consider a partial correctness specification of the form {phi} prog {psi}, where the assertions phi and psi are predicates defined by a set Spec of possibly recursive Horn clauses with linear arithmetic (LA) constraints in their premise (also called constrained Horn clauses). The verification method consists in constructing a set PC of constrained Horn clauses whose satisfiability implies that {phi} prog {psi} is valid. We highlight some limitations of state-of-the-art constrained Horn clause solving methods, here called LA-solving methods, which prove the satisfiability of the clauses by looking for linear arithmetic interpretations of the predicates. In particular, we prove that there exist some specifications that cannot be proved valid by any of those LA-solving methods. These specifications require the proof of satisfiability of a set PC of constrained Horn clauses that contain nonlinear clauses (that is, clauses with more than one atom in their premise). Then, we present a transformation, called linearization, that converts PC into a set of linear clauses (that is, clauses with at most one atom in their premise). We show that several specifications that could not be proved valid by LA-solving methods can be proved valid after linearization. We also present a strategy for performing linearization in an automatic way, and we report on some experimental results obtained by using a preliminary implementation of our method.
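As a concrete illustration of the setting (our own encoding, not the paper's tool), the sketch below expresses the triple {x = 0} while x < 10 do x := x + 1 {x = 10} as linear constrained Horn clauses over an unknown invariant Inv and hands them to Z3's Horn solver. This triple admits a linear interpretation of Inv, so it is within reach of LA-solving directly; the paper targets harder cases where nonlinear clause sets must first be linearized.

```python
# Verification conditions for {x = 0} while x < 10: x += 1 {x = 10},
# encoded as constrained Horn clauses over an unknown invariant Inv
# and solved with Z3's Horn engine.
import z3

x = z3.Int("x")
Inv = z3.Function("Inv", z3.IntSort(), z3.BoolSort())

s = z3.SolverFor("HORN")
s.add(z3.ForAll([x], z3.Implies(x == 0, Inv(x))))                      # init
s.add(z3.ForAll([x], z3.Implies(z3.And(Inv(x), x < 10), Inv(x + 1))))  # step
s.add(z3.ForAll([x], z3.Implies(z3.And(Inv(x), x >= 10), x == 10)))    # post

print(s.check())  # sat: the specification is valid
print(s.model())  # a linear interpretation of Inv, e.g. x <= 10
```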
This special issue of Theory and Practice of Logic Programming (TPLP) consists of the regular papers accepted for presentation at the International Conference on Logic Programming (ICLP) 2015. Altogether 89 abstracts were submitted, of which 74 resulted in paper submissions. The program chairs, acting as guest editors of this special issue, organized the refereeing process with the help of the program committee and external reviewers. Each paper was reviewed by at least three anonymous referees who provided full written evaluations. The paper Complexity and Compilation of GZ-Aggregates in Answer Set Programming, by Mario Alviano and Nicola Leone, was awarded the ICLP 2015 Best Paper Prize. In addition to the presentations of accepted papers, the technical program of ICLP 2015 included two panels: The Future Publication of ICLP Proceedings, chaired by Torsten Schaub, and Teaching Computer Science and Declarative Programming in Schools and Universities, chaired by Manuel Hermenegildo.
Timed Concurrent Constraint Programming (tcc) is a declarative model of concurrency offering a logic for specifying reactive systems, i.e., systems that continuously interact with the environment. The universal tcc formalism (utcc) is an extension of tcc with the ability to express mobility, where mobility is understood as the communication of private names, as typically done for mobile systems and security protocols. In this paper we consider the denotational semantics of tcc and extend it to a "collecting" semantics for utcc based on closure operators over sequences of constraints. Relying on this semantics, we formalize a general framework for data flow analyses of tcc and utcc programs by abstract interpretation techniques. The concrete and abstract semantics that we propose are compositional, thus allowing us to reduce the complexity of data flow analyses. We show that our method is sound and parametric with respect to the abstract domain, so different analyses can be performed by instantiating the framework. We illustrate how abstract domains previously defined for logic programming can be reused to perform, for instance, a groundness analysis of tcc programs, and we show the applicability of this analysis in the context of reactive systems. Furthermore, we use the abstract semantics to exhibit a secrecy flaw in a security protocol. We also show how to devise an analysis that can prove tcc programs suspension-free, which is useful for several purposes, such as optimizing compilation or debugging.
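To give a flavour of the groundness analysis mentioned above, here is a minimal sketch (ours; real tcc constraint systems and abstract domains such as Pos are considerably richer): the abstract store is the set of definitely-ground variables, and telling a constraint acts as an extensive, monotone, idempotent operator on it.

```python
# Toy groundness analysis: the abstract store tracks which variables
# are definitely ground; a constraint is simplified to the pair
# (defined variable, variables it depends on). This format is a
# made-up simplification of a real tcc constraint system.

def tell(ground, constraint):
    """E.g. x = y + 1 becomes ('x', {'y'}): x is ground once y is."""
    var, deps = constraint
    return ground | {var} if deps <= ground else ground

def analyse(initial_ground, constraints):
    """Iterate tells to a fixpoint over abstract stores."""
    ground = set(initial_ground)
    changed = True
    while changed:
        changed = False
        for c in constraints:
            new = tell(ground, c)
            if new != ground:
                ground, changed = new, True
    return ground

# x = 5; y = x + 1; z = y + w   (w is never grounded)
print(analyse({"x"}, [("y", {"x"}), ("z", {"y", "w"})]))  # {'x', 'y'}
```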