ISBN: (Print) 3540206426
An approach to authorization that is based on attributes of the resource requester provides flexibility and scalability that are essential in the context of large distributed systems. Logic programming provides an elegant, expressive, and well-understood framework in which to work with attribute-based authorization policy. We summarize one specific attribute-based authorization framework built on logic programming: RT, a family of Role-based Trust-management languages. RT's logic-programming foundation has facilitated the conception and specification of several extensions that greatly enhance its expressivity with respect to important security concepts such as parameterized roles, thresholds, and separation of duties. After examining language design issues, we consider the problem of assessing authorization policies with respect to the vulnerability of resource owners to a variety of security risks arising from delegations to other principals, such as undesired authorizations and unavailability of critical resources. We summarize analysis techniques for assessing such vulnerabilities.
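To make the flavour of such logic-programming-based credentials concrete, here is a minimal Python sketch of an RT0-style evaluator: roles are (issuer, name) pairs, the four credential forms follow the RT0 design, and role membership is computed as a least fixpoint. The concrete policy (EPub, ACM, StateU and the role names) is invented for illustration and is not taken from the paper.

```python
# A minimal sketch (not code from the paper): a naive fixpoint evaluator for
# RT0-style credentials. Roles are (issuer, role_name) pairs; the four
# credential forms follow RT0, but the policy below is invented.
#
# Credential forms:
#   ("member",    A_r, D)             # A.r <- D
#   ("include",   A_r, B_r1)          # A.r <- B.r1
#   ("link",      A_r, A_r1, "r2")    # A.r <- A.r1.r2   (linked roles)
#   ("intersect", A_r, B_r1, C_r2)    # A.r <- B.r1 ∩ C.r2
credentials = [
    ("member",    ("ACM", "member"), "alice"),
    ("member",    ("StateU", "student"), "alice"),
    ("member",    ("EPub", "university"), "StateU"),
    ("include",   ("EPub", "student"), ("StateU", "student")),
    ("link",      ("EPub", "discount"), ("EPub", "university"), "student"),
    ("intersect", ("EPub", "preferred"), ("ACM", "member"), ("EPub", "discount")),
]

def role_members(creds):
    """Least fixpoint of role membership under the RT0 reading of credentials."""
    members = {}
    def m(role):
        return members.setdefault(role, set())
    changed = True
    while changed:
        changed = False
        for cred in creds:
            kind, target = cred[0], cred[1]
            before = len(m(target))
            if kind == "member":
                m(target).add(cred[2])
            elif kind == "include":
                m(target).update(m(cred[2]))
            elif kind == "link":      # union of B.r2 over all B in A.r1
                for b in m(cred[2]):
                    m(target).update(m((b, cred[3])))
            elif kind == "intersect":
                m(target).update(m(cred[2]) & m(cred[3]))
            changed |= len(m(target)) > before
    return members

print(role_members(credentials)[("EPub", "preferred")])   # {'alice'}
```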
ISBN: (Print) 3540206426
Planning, in its classical sense, is the problem of finding a sequence of actions that achieves a predefined goal. As such, much of the research in AI planning has been focused on methodologies and issues related to the development of efficient planners. To date, several efficient planning systems have been developed (e.g., see [3] for a summary of planners that competed in the International Conference on Artificial Intelligence Planning and Scheduling). These developments can be attributed to the discovery of good domain-independent heuristics, the use of domain-specific knowledge, and the development of efficient data structures used in the implementation of the planning algorithms. Logic programming has played a significant role in this line of research, providing a declarative framework for the encoding of different forms of knowledge and its effective use during the planning process [5].
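As a concrete reminder of the classical formulation the abstract refers to, the sketch below searches for a sequence of STRIPS-style actions reaching a goal by breadth-first search. The toy domain (a robot fetching a key) and all predicate names are invented; real planners rely on the heuristics and encodings discussed above rather than blind search.

```python
# Hedged sketch: classical planning as search over STRIPS-style states.
# The toy domain is made up and only illustrates the "sequence of actions
# achieving a goal" formulation.

from collections import deque

# An action: (name, preconditions, add-effects, delete-effects), all sets of facts.
ACTIONS = [
    ("move(a,b)",   {"at(a)"},            {"at(b)"},      {"at(a)"}),
    ("move(b,a)",   {"at(b)"},            {"at(a)"},      {"at(b)"}),
    ("pickup(key)", {"at(b)", "key(b)"},  {"have(key)"},  {"key(b)"}),
]

def plan(initial, goal):
    """Breadth-first search for a shortest action sequence reaching the goal."""
    initial = frozenset(initial)
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at(a)", "key(b)"}, {"have(key)", "at(a)"}))
# ['move(a,b)', 'pickup(key)', 'move(b,a)']
```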
Author:
Benhamou, Belaïd
Domaine univ. S. Jérôme, Ave. Escadrille Normandie Niemen, 13397 Marseille Cedex 20, France
Université d'Artois, SP 18, Rue Jean Souvraz, F-62307 Lens Cedex, France
Much research has been done to define a semantics for logic programs. The best known is the stable model semantics, which selects for each program one of its canonical models. The stable models of a logic...
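To make "stable model" concrete, the following sketch brute-forces the Gelfond-Lifschitz definition for a tiny propositional program. The two-rule example (p :- not q. q :- not p.) is a standard one with exactly two stable models; the encoding of rules as Python tuples is purely illustrative.

```python
# Hedged sketch: a brute-force stable-model check for a tiny propositional
# normal program. A rule is (head, positive_body, negative_body).

from itertools import chain, combinations

PROGRAM = [
    ("p", set(), {"q"}),   # p :- not q.
    ("q", set(), {"p"}),   # q :- not p.
]
ATOMS = {"p", "q"}

def least_model(definite_rules):
    """Least model of a negation-free program via a simple fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate, program):
    """Gelfond-Lifschitz: keep a rule only if its negative body is disjoint
    from the candidate, then compare the reduct's least model with the candidate."""
    reduct = [(h, pos) for h, pos, neg in program if not (neg & candidate)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(ATOMS, k) for k in range(len(ATOMS) + 1))
print([set(s) for s in subsets if is_stable(set(s), PROGRAM)])
# two stable models: {'p'} and {'q'}
```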
ISBN: (Print) 3540206426
Logic programming has been advocated as a language for system specification, especially for logical behaviours, rules and knowledge. However, modeling problems involving negation, which is quite natural in many cases, is somewhat restricted if Prolog is used as the specification/implementation language. These constraints do not stem from theory, where users can find many different models with their respective semantics; they concern practical implementation issues. The negation capabilities supported by current Prolog systems are rather limited, and a correct and complete implementation is not available. Of all the proposals, constructive negation [1,2] is probably the most promising, because it has been proven to be sound and complete [4] and its semantics is fully compatible with Prolog's.
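The practical gap the abstract describes can be illustrated with a small sketch: negation as failure answers only ground negative goals, whereas constructive negation returns the values (or constraints) under which the negation holds. The one-predicate database and domain below are invented, and Python merely models the two behaviours rather than implementing a Prolog system.

```python
# Hedged sketch: why negation as failure (Prolog's \+) is weaker than
# constructive negation, modelled over a tiny invented database.

DOMAIN = {"a", "b", "c"}
P_FACTS = {"a"}                      # p(a).

def naf_not_p(x=None):
    """Negation as failure: \\+ p(X). Sound only when X is bound (ground);
    with X unbound, p(X) has a solution, so the whole goal simply fails and
    tells us nothing about which X would satisfy not p(X)."""
    if x is None:                    # non-ground call
        return not any(v in P_FACTS for v in DOMAIN)
    return x not in P_FACTS          # ground call: correct yes/no answer

def constructive_not_p():
    """Constructive negation instead returns answers: the values of X for
    which not p(X) provably holds."""
    return {v for v in DOMAIN if v not in P_FACTS}

print(naf_not_p())                   # False -- the non-ground query just fails
print(naf_not_p("b"))                # True  -- ground queries are fine
print(sorted(constructive_not_p()))  # ['b', 'c'] -- concrete answers
```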
ISBN: (Print) 9783031291258; 9783031291265
Inductive logic programming (ILP) is an inductive reasoning method based on first-order predicate logic. This technique is widely used for data mining with symbolic artificial intelligence. ILP searches for a suitable hypothesis that covers the positive examples and excludes the negative examples. For practical problems, the search process incurs a high execution cost because many given examples must be interpreted. In this paper, we propose a new hypothesis search method using particle swarm optimization (PSO). PSO is a meta-heuristic algorithm based on the behaviors of particles. In our approach, each particle repeatedly moves from one hypothesis to another within a hypothesis space. During this process, hypotheses are refined based on the value returned by a predefined evaluation function. Since PSO searches only part of the hypothesis space, it speeds up the execution of ILP. To demonstrate the effectiveness of our method, we implemented it on Progol, one of the ILP systems [6], and conducted numerical experiments. The results showed that our method reduced the hypothesis search time compared to conventional Progol.
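For readers unfamiliar with PSO, the sketch below shows the standard continuous particle update (inertia plus personal-best and global-best attraction) minimizing a toy objective. It illustrates only the particle dynamics; the paper's contribution of moving particles through Progol's discrete hypothesis space, and its evaluation function, are not reproduced here.

```python
# Hedged sketch: the standard (continuous) PSO update rule on a toy objective.
import random

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for an ILP evaluation function (lower is better).
print(pso(lambda p: sum(x * x for x in p)))
```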
ISBN: (Print) 9781450328739
We propose the integration of a relational specification framework within a dependent type system capable of verifying complex invariants over the shapes of algebraic datatypes. Our approach is based on the observation that structural properties of such datatypes can often be naturally expressed as inductively-defined relations over the recursive structure evident in their definitions. By interpreting constructor applications (abstractly) in a relational domain, we can define expressive relational abstractions for a variety of complex data structures, whose structural and shape invariants can be automatically verified. Our specification language also allows for definitions of parametric relations for polymorphic data types that enable highly composable specifications and naturally generalizes to higher-order polymorphic functions. We describe an algorithm that translates relational specifications into a decidable fragment of first-order logic that can be efficiently discharged by an SMT solver. We have implemented these ideas in a type checker called CATALYST that is incorporated within the MLton SML compiler. Experimental results and case studies indicate that our verification strategy is both practical and effective.
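The following sketch conveys the idea of abstracting a recursive datatype into a relation, here an "occurs-before" relation on lists, and stating a specification of rev in terms of it. The relation name, the spec, and the test-based check are illustrative only; CATALYST expresses such specifications in its own language and discharges them statically through an SMT solver.

```python
# Hedged sketch: a relational abstraction of lists, checked by testing, to
# make "inductively-defined relations over recursive structure" concrete.

def occurs_before(xs):
    """Relational abstraction: all pairs (x, y) with x strictly before y."""
    return {(xs[i], xs[j]) for i in range(len(xs)) for j in range(i + 1, len(xs))}

def inverse(rel):
    return {(y, x) for (x, y) in rel}

def rev(xs):
    return list(reversed(xs))

# Specification: the occurs-before relation of rev(l) is the inverse of l's.
for sample in ([], [1], [1, 2, 3], ["a", "b", "c", "d"]):
    assert occurs_before(rev(sample)) == inverse(occurs_before(sample))
print("spec holds on all samples")
```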
ISBN: (Print) 9781424438914
Most existing approaches to targeting high-level software to FPGAs are based on extensions to C and do not map easily to the features and characteristics of modern FPGAs. These include massive parallelism and a variety of complex IP blocks (e.g., RAMs, DSPs). In this paper we discuss a hardware implementation of SR, a software language with first-class concurrency and high-level IPC. We show that the language model can be implemented efficiently on an FPGA, and that it provides a natural means to encapsulate FPGA resources. We compare against a commercial C-based synthesis tool and achieve similar resource usage while using a more expressive language.
ISBN: (Print) 3540206426
One recent development in logic programming has been the application of abstract interpretation to verify the partial correctness of a logic program with respect to a given set of assertions. One approach to verification is to apply forward analysis that starts with an initial goal and traces the execution in the direction of the control flow to approximate the program state at each program point. This is often enough to verify that the assertions hold. The dual approach is to apply backward analysis to propagate properties of the allowable states against the control flow to infer queries for which the program will not violate any assertion. This paper is a systematic comparison of these two approaches to verification. The paper reports some equivalence results that relate the relative power of various forward and backward analysis frameworks.
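The duality can be pictured on a single toy transition with an interval abstraction, as in the sketch below: the forward direction propagates a description of the entry states toward the assertion, while the backward direction infers the entry states for which the assertion cannot be violated. This is an imperative-flavoured analogy with invented numbers, not the logic-program analyses compared in the paper.

```python
# Hedged sketch: forward vs. backward reasoning over one toy transition,
# using intervals. Transition: y = x + 3, followed by the assertion 0 <= y <= 10.
ASSERT_Y = (0, 10)

def forward(x_range):
    """From a description of the initial states, compute y's range and check it."""
    lo, hi = x_range
    y_range = (lo + 3, hi + 3)
    holds = ASSERT_Y[0] <= y_range[0] and y_range[1] <= ASSERT_Y[1]
    return y_range, holds

def backward():
    """From the assertion, infer the initial x-range for which it cannot fail."""
    lo, hi = ASSERT_Y
    return (lo - 3, hi - 3)

print(forward((0, 5)))   # ((3, 8), True)   -- verified for this entry description
print(forward((0, 9)))   # ((3, 12), False) -- cannot be verified
print(backward())        # (-3, 7)          -- safe entry states inferred backwards
```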
ISBN: (Digital) 9783030974541
ISBN: (Print) 9783030974541; 9783030974534
Ontologies - providing an explicit schema for underlying data - often serve as background knowledge for machine learning approaches. Similar to ILP methods, concept learning utilizes such ontologies to learn concept expressions from examples in a supervised manner. This learning process is usually cast as a search through the space of ontologically valid concept expressions, guided by heuristics. Such heuristics usually try to balance the explorative and exploitative behaviors of the learning algorithms. While exploration ensures good coverage of the search space, exploitation focuses on those parts of the search space likely to contain accurate concept expressions. However, at their extremes, both paradigms are impractical: a totally random explorative approach will only find good solutions by chance, whereas a greedy but myopic exploitative attempt can easily get trapped in local optima. To combine the advantages of both paradigms, different meta-heuristics have been proposed. In this paper, we examine the Simulated Annealing meta-heuristic and how it can be used to balance the exploration-exploitation trade-off in concept learning. In different experimental settings, we analyse how and where existing concept learning algorithms can benefit from the Simulated Annealing meta-heuristic.
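The sketch below shows the ingredient the paper builds on: the simulated-annealing acceptance rule, which accepts worse candidates with probability exp(delta/T) and so explores early and exploits late as the temperature cools. The neighbour function and scoring are placeholders, not a concept-learning refinement operator or the paper's heuristic.

```python
# Hedged sketch: the simulated-annealing loop with the standard acceptance rule.
import math
import random

def simulated_annealing(initial, neighbour, score, t0=1.0, cooling=0.95, iters=500):
    current, current_score = initial, score(initial)
    best, best_score = current, current_score
    t = t0
    for _ in range(iters):
        candidate = neighbour(current)
        cand_score = score(candidate)
        delta = cand_score - current_score          # we maximize the score
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
        t *= cooling                                 # cool down: less exploration
    return best, best_score

# Toy search: maximize -(x - 7)^2 over integers by +/-1 moves.
result = simulated_annealing(
    initial=0,
    neighbour=lambda x: x + random.choice([-1, 1]),
    score=lambda x: -(x - 7) ** 2,
)
print(result)   # typically close to (7, 0)
```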
ISBN: (Print) 9788996865094
The paper introduces a prototype of an algorithm that creates personalized news articles about IT and technology based on each person's preference for a specific theme, criterion, or element. When provided with a specific personal preference, the algorithm Custombot analyses the data, derives the most appropriate topic containing the most elements preferred by the person, and eventually produces a unique news article on that topic. While processing and analysing data by inductive reasoning, Custombot considers the concepts of news angle and filter bubble. Text segmentation (tokenization) and custom tagging are two of the tasks used to construct this system, allowing it to create custom tags and insert matching information in appropriate places. The resulting customized news articles can serve as a new service offered by news organizations to satisfy each consumer's needs, and can also be a stepping stone toward expanding the role and increasing the importance of robot journalism in the broader field of journalism.
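As an illustration of the two tasks mentioned, the sketch below tokenizes a sentence with a regular expression, attaches custom tags from a small hand-written vocabulary, and slots matching tokens into a one-sentence template. The tag names, vocabulary, and template are invented and are not Custombot's actual pipeline.

```python
# Hedged sketch: regex tokenization plus a hand-written custom tagger of the
# kind the abstract mentions. Everything below is invented for illustration.
import re

CUSTOM_TAGS = {
    "fpga": "HARDWARE",
    "prolog": "LANGUAGE",
    "smt": "TOOL",
    "planning": "TOPIC",
}

def tokenize(text):
    """Split into lowercase word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

def tag(tokens):
    """Attach a custom tag to each token (OTHER when no tag matches)."""
    return [(tok, CUSTOM_TAGS.get(tok, "OTHER")) for tok in tokens]

def fill_template(tagged, preference):
    """Insert matching tagged tokens into a one-sentence template."""
    matches = [tok for tok, t in tagged if t == preference]
    if not matches:
        return "No personalized story available for this preference."
    return f"Today's personalized brief focuses on: {', '.join(sorted(set(matches)))}."

tokens = tokenize("New FPGA boards ship with Prolog-based planning demos.")
print(tag(tokens))
print(fill_template(tag(tokens), "HARDWARE"))   # mentions 'fpga'
```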