Normal forms for logic programs under the stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying the existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The bodies of rules are composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e. the syntactic characterization of the existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called the Cycle Graph, which is presented in the companion article Costantini (2004b).
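As a minimal illustration (not taken from the paper) of a program with purely negative rule bodies, the following clingo-syntax fragment has a single answer set, {a}: the even negative cycle between a and b yields two candidates, and the odd negative loop on p, which is defeated whenever a holds, eliminates the candidate containing b.

```
% Even negative cycle: on its own it would give two answer sets, {a} and {b}.
a :- not b.
b :- not a.

% Odd negative loop on p, defeated whenever a holds: the candidate containing b
% cannot be extended to a stable model, so {a} is the only answer set.
p :- not p, not a.
```

Running a solver such as clingo with `clingo kernel.lp 0` (the file name is just a placeholder) enumerates the answer sets and confirms that only {a} remains.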
Schlipf (1995) proved that Stable Logic Programming (SLP) solves all NP decision problems. We extend Schlipf's result to prove that SLP solves all search problems in the class NP. Moreover, we do this in a uniform way as defined in Marek and Truszczynski (1991). Specifically, we show that there is a single DATALOG¬ program P_Trg such that given any Turing machine M, any polynomial p with non-negative integer coefficients, and any input σ of size n over a fixed alphabet Σ, there is an extensional database edb(M, p, σ) such that there is a one-to-one correspondence between the stable models of edb(M, p, σ) ∪ P_Trg and the accepting computations of the machine M that reach the final state in at most p(n) steps. Moreover, edb(M, p, σ) can be computed in polynomial time from p, σ, and the description of M, and the decoding of such accepting computations from the corresponding stable models of edb(M, p, σ) ∪ P_Trg can be computed in linear time. A similar statement holds for Default Logic with respect to Σ₂^P-search problems.
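The Turing-machine encoding itself is beyond the scope of an abstract, but the flavour of "uniform" solving, one fixed rule set with each instance supplied purely as an extensional database, can be sketched with a standard vertex-cover program in clingo syntax (the predicates below are illustrative and are not those of P_Trg).

```
% Fixed (intensional) program: the same for every instance.
in(X)  :- vertex(X), not out(X).
out(X) :- vertex(X), not in(X).
:- edge(X,Y), not in(X), not in(Y).

% Instance-specific extensional database: only these facts change.
vertex(1..3).
edge(1,2). edge(2,3).
```

Stable models of the union correspond one-to-one to vertex covers of the encoded graph; P_Trg plays the analogous role for accepting Turing-machine computations.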
In this paper we give a short introduction to the logic programming approach to knowledge representation and reasoning. The intention is to help the reader develop a 'feel' for the field's history and some of its recent developments. The discussion is mainly limited to logic programs under the answer set semantics. For an understanding of approaches to logic programming built on well-founded semantics, general theories of argumentation, abductive reasoning, etc., the reader is referred to other publications. (C) 2002 Elsevier Science B.V. All rights reserved.
Many computationally hard problems become tractable if the graph structure underlying the problem instance exhibits small treewidth. A recent approach to putting this idea into practice is based on a declarative interface to answer set programming that allows dynamic programming over tree decompositions to be specified in this language, delegating the computation to dedicated solvers. In this article, we prove that this method can be applied to any problem whose fixed-parameter tractability follows from Courcelle's Theorem.
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use answer set programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data, and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method points out erroneous time-points in the time series data with respect to the optimal learned logic models. (C) 2016 Elsevier Ireland Ltd. All rights reserved.
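A hedged toy sketch of the idea in clingo syntax: a candidate network's synchronous update rules are unrolled over time, and an integrity constraint rejects the candidate if its trajectory contradicts a discretized measurement. All gene names, update functions, and observations below are invented for illustration; the paper's actual encoding is considerably more involved.

```
step(0..1).                      % two synchronous transitions: t = 0 -> 1 -> 2

% Hypothetical update functions of a 3-gene candidate network:
%   a(t+1) = b(t) AND NOT c(t),  b(t+1) = a(t),  c(t+1) = a(t) OR b(t)
holds(a,T+1) :- holds(b,T), not holds(c,T), step(T).
holds(b,T+1) :- holds(a,T), step(T).
holds(c,T+1) :- holds(a,T), step(T).
holds(c,T+1) :- holds(b,T), step(T).

% Discretized initial observation ...
holds(a,0). holds(b,0).

% ... and a later observation; this constraint discards candidate networks
% whose trajectory cannot reproduce it.
:- not holds(c,2).
```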
Building Information Modeling (BIM) produces three-dimensional object-oriented models of buildings combining geometrical information with a wide range of properties about materials, products, and safety, to name just a few. BIM is slowly but inevitably revolutionizing the architecture, engineering, and construction industry. Buildings need to be compliant with regulations about stability, safety, and environmental impact. Manual compliance checking is tedious and error-prone, and amending flaws discovered only at construction time causes huge additional costs and delays. Several tools can check BIM models for conformance with rules/guidelines. For example, Singapore's CORENET e-Submission System checks fire safety. But since the current BIM exchange format only contains basic information about building objects, a separate, ad hoc model pre-processing step is required to determine, for example, evacuation routes. Moreover, such tools face difficulties in adapting existing built-in rules and/or adding new ones (to cater for building regulations, which can vary not only among countries but also among parts of the same city), if this is possible at all. We propose the use of logic-based executable formalisms (CLP and Constraint ASP) to couple BIM models with advanced knowledge representation and reasoning capabilities. Previous experience shows that such formalisms can be used to uniformly capture and reason with knowledge (including ambiguity) in a large variety of domains. Additionally, incorporating checking within design tools makes it possible to ensure that models are rule-compliant at every step. This also prevents erroneous designs from having to be (partially) redone, which is likewise costly and burdensome. To validate our proposal, we implemented a preliminary reasoner under CLP(Q/R) and ASP with constraints and evaluated it with several BIM models.
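To make the idea concrete, here is a hedged, toy-scale sketch in clingo syntax of how a single regulation could be checked against facts exported from a BIM model; all predicate and object names are invented and do not reflect the prototype's actual schema.

```
% Facts (hypothetically) extracted from a BIM model.
room(office1). room(lab2).
door(d1). connects(d1, office1, corridor1).
exit_route(corridor1).

% A room complies with the (made-up) rule if some door connects it
% to a space lying on an exit route.
compliant(R) :- room(R), door(D), connects(D, R, S), exit_route(S).

% Report violations instead of failing outright, so the checker can list them.
violation(R) :- room(R), not compliant(R).
#show violation/1.
```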
Cybersecurity is a growing concern in today's society. Security policies have been developed to ensure that data and assets remain protected for legitimate users, but there must be a mechanism to verify that these policies can be enforced. This paper addresses the verification problem of security policies in role-based access control of enterprise software. Most existing approaches employ traditional logic or procedural programming that tends to involve complex expressions or search with backtracking. These can be time-consuming and hard to understand and update, especially for large-scale security verification problems. Declarative programming paradigms such as answer set programming have been widely used to alleviate these issues by way of elegant and flexible modeling for complex search problems. However, solving problems using these paradigms can be challenging due to the nature and limitations of the declarative problem solver. This paper presents an approach to automated security policy verification using answer set programming. In particular, we investigate how the separation of duty security policy in role-based access control can be verified. Our contribution is a modeling approach that maps this verification problem into a graph-coloring problem to facilitate the use of generate-and-test in a declarative problem-solving paradigm. The paper describes a representation model and rules that drive the answer set solver and illustrates the proposed approach by securing web application software that assists the hiring process in a company.
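The paper's concrete rules are not reproduced in the abstract, but the generate-and-test/coloring analogy can be sketched in clingo syntax as follows: roles act as vertices, separation-of-duty pairs as edges, and users as colors. All names are hypothetical.

```
% Roles, users, and one separation-of-duty (SoD) pair.
role(cashier). role(auditor). role(manager).
user(alice). user(bob).
sod(cashier, auditor).            % these two roles must not share a user

% Generate: assign every role to exactly one user.
1 { holds_role(U, R) : user(U) } 1 :- role(R).

% Test: reject assignments where one user holds both roles of an SoD pair --
% the "coloring" condition that adjacent vertices get different colors.
:- sod(R1, R2), holds_role(U, R1), holds_role(U, R2).
```

Every answer set is then an assignment respecting the SoD pair, with the solver's enumeration replacing hand-written search with backtracking.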
Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic, since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for answer set programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the Inductive Logic Programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
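As a hedged illustration of what a lifted, interpretable constraint can look like (not one actually produced by the framework), consider interchangeable "holes" in a toy assignment problem written in clingo syntax: a single first-order ordering constraint removes the permuted duplicates that instance-specific propositional SBCs would otherwise have to enumerate clause by clause.

```
% Toy instance: items assigned to interchangeable holes.
item(1..3). hole(1..3).
1 { in(I,H) : hole(H) } 1 :- item(I).

% Illustrative lifted SBC: hole H may only be used by item I if some smaller
% item already uses hole H-1, i.e. holes are "opened" in order. This keeps
% exactly one representative of each class of solutions that differ only by
% a permutation of the holes.
used_before(I,H) :- item(I), hole(H), in(J,H-1), J < I.
:- in(I,H), H > 1, not used_before(I,H).
```

For this instance the constrained program has exactly five answer sets, one per partition of the three items, instead of the 27 symmetric assignments of the unconstrained encoding.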
Every year about one third of the food production intended for humans gets lost or wasted. This wastefulness of resources leads to the emission of unnecessary greenhouse gas, contributing to global warming and climate change. The solution proposed by the SORT project is to "recycle" the surplus of food by reconditioning it into animal feed or fuel for biogas/biomass power plants. In order to maximize the earnings and minimize the costs, several choices must be made during the reconditioning process. Given the extremely complex nature of the process, Decision Support Systems (DSSs) could be helpful to reduce the human effort in decision making. In this paper, we present a DSS for food recycling developed using two approaches for finding the optimal solution: one based on Binary Linear Programming (BLP) and the other based on Answer Set Programming (ASP), which outperform our previous approach based on Constraint Logic Programming (CLP) on Finite Domains (CLP(FD)). In particular, the BLP and the CLP(FD) approaches are developed in ECLiPSe, a Prolog system that interfaces with various state-of-the-art Mathematical and Constraint Programming solvers. The ASP approach, instead, is developed in clingo. The three approaches are compared on several synthetic datasets that simulate the operative conditions of the DSS.
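The abstract does not include the encodings, but the general shape of an ASP formulation for this kind of routing and optimization problem can be sketched in clingo as follows; batches, destinations, suitability relations, and profits are all invented placeholders rather than the project's data model.

```
% Hypothetical surplus batches, per-destination profit, and suitability.
batch(b1;b2;b3).
profit(feed,10). profit(biogas,6). profit(disposal,-2).
suitable(b1,feed). suitable(b1,biogas).
suitable(b2,biogas). suitable(b2,disposal).
suitable(b3,feed). suitable(b3,disposal).

% Generate: route each batch to exactly one destination it is suitable for.
1 { route(B,D) : suitable(B,D) } 1 :- batch(B).

% Optimize: maximize total profit (equivalently, minimize cost).
#maximize { P,B : route(B,D), profit(D,P) }.
#show route/2.
```

clingo then enumerates optimal routings directly, while a BLP counterpart would express the same choice with 0/1 variables and a linear objective.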
Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis comprising a set of rules, given background knowledge and constraints for the search space. We focus on extending the XHAIL algorithm for ILP, which is based on answer set programming, and we evaluate our extensions using the Natural Language Processing application of sentence chunking. With respect to processing natural language, ILP can cater for the constant change in how we use language on a daily basis. At the same time, ILP does not require the huge amounts of training examples that other statistical methods do, and it produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) a better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow for learning on bigger datasets with results that are of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on the datasets to gain insights into the structure of each dataset. (c) 2017 Elsevier Ltd. All rights reserved.