ISBN (Print): 9783031042164; 9783031042157
One of the main limitations of blockchain systems based on Proof of Work is scalability, making them unsuitable for e-commerce and small payments. Currently, one of the principal directions to overcome the scalability issue is to use so-called "layer two" solutions, such as the Lightning Network, where users open channels and send payments through them. In this paper, we propose a probabilistic logic model of the Lightning Network, and we show how it can be used to compute several of its properties. We conduct experiments to demonstrate the applicability of the model, rather than to provide a comprehensive analysis of the network.
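The abstract does not include the model itself; as a rough illustration of the distribution-semantics style of reasoning it refers to, the sketch below treats each channel as an independent probabilistic fact and computes the probability that at least one route between two nodes can forward a payment. The network, node names and probabilities are invented for the example.

```python
from itertools import product

# Hypothetical toy network: each directed channel forwards a payment
# independently with the given probability (names and values are illustrative).
channels = {
    ("alice", "bob"): 0.9,
    ("bob", "carol"): 0.8,
    ("alice", "dave"): 0.7,
    ("dave", "carol"): 0.6,
}

def reachable(open_channels, src, dst):
    """Check whether dst is reachable from src over the open channels."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (a, b) in open_channels:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

def payment_success_probability(src, dst):
    """Enumerate possible worlds (distribution-semantics style) and sum the
    probability of those in which some route from src to dst works."""
    edges = list(channels)
    total = 0.0
    for world in product([True, False], repeat=len(edges)):
        p = 1.0
        open_channels = []
        for edge, works in zip(edges, world):
            p *= channels[edge] if works else 1.0 - channels[edge]
            if works:
                open_channels.append(edge)
        if reachable(open_channels, src, dst):
            total += p
    return total

print(payment_success_probability("alice", "carol"))  # ≈ 0.8376
```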
ISBN (Print): 9781450394758
Scene graph generation takes an image and derives a graph representation of key objects in the image and their relations. This core computer vision task is often used in autonomous driving, where traditional software and machine learning (ML) components are used in tandem. However, in such a safety-critical context, valid scene graphs can be further restricted by consistency constraints captured by domain or safety experts. Existing ML approaches for scene graph generation focus exclusively on relation-level accuracy but provide little to no guarantee that consistency constraints are satisfied in the generated scene graphs. In this paper, we aim to complement existing ML-based approaches with a post-processing step using constraint optimization over probabilistic scene graphs that can (1) guarantee that no consistency constraints are violated and (2) improve the overall accuracy of scene graph generation by fixing constraint violations. We evaluate the effectiveness of our approach using well-known and novel metrics in the context of two popular ML datasets augmented with consistency constraints and two ML-based scene graph generation approaches as baselines.
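As a minimal illustration of the post-processing idea (not the authors' implementation), the sketch below takes per-pair relation distributions produced by an ML model and selects the jointly most probable labeling that satisfies a hand-written consistency constraint; the objects, labels and constraint are assumptions made for the example.

```python
from itertools import product

# Hypothetical probabilistic scene graph: for each (subject, object) pair the
# ML model outputs a distribution over relation labels (values are invented).
predictions = {
    ("car1", "lane1"): {"isOn": 0.6, "isBehind": 0.4},
    ("car2", "lane1"): {"isOn": 0.7, "isBehind": 0.3},
    ("car1", "car2"):  {"isOn": 0.55, "isBehind": 0.45},
}

def consistent(assignment):
    """Example consistency constraint (assumed): a car cannot be 'isOn' another car."""
    return assignment[("car1", "car2")] != "isOn"

def repair(predictions):
    """Pick the jointly most probable labeling that satisfies all constraints."""
    pairs = list(predictions)
    best, best_score = None, -1.0
    for labels in product(*(predictions[p] for p in pairs)):
        assignment = dict(zip(pairs, labels))
        if not consistent(assignment):
            continue  # constraint violated: discard this labeling
        score = 1.0
        for p in pairs:
            score *= predictions[p][assignment[p]]
        if score > best_score:
            best, best_score = assignment, score
    return best

# The raw argmax 'isOn' for (car1, car2) violates the constraint
# and is repaired to 'isBehind'.
print(repair(predictions))
```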
Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively: PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLB-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies, but in a shorter time.
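The abstract only names the algorithms; the following self-contained sketch conveys the gradient-based flavor of parameter learning on a two-clause toy program whose clauses are combined with a noisy-or, reparameterizing each probability through a sigmoid. The program, data and learning rate are invented for the example and do not reproduce PHIL.

```python
import math, random

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def p_query(w1, w2, a, b):
    # noisy-or of the two toy clauses "p1::q :- a."  and  "p2::q :- b."
    return 1.0 - (1.0 - sigmoid(w1) * a) * (1.0 - sigmoid(w2) * b)

# Synthetic data drawn from "true" parameters 0.8 and 0.3.
random.seed(0)
data = []
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    q = 1 if random.random() < 1.0 - (1.0 - 0.8 * a) * (1.0 - 0.3 * b) else 0
    data.append((a, b, q))

w1 = w2 = 0.0                       # unconstrained weights, p_i = sigmoid(w_i)
lr, eps = 0.5, 1e-6
for _ in range(500):                # batch gradient ascent on the log-likelihood
    g1 = g2 = 0.0
    for a, b, q in data:
        p = min(max(p_query(w1, w2, a, b), eps), 1.0 - eps)
        dlogl_dp = (q - p) / (p * (1.0 - p))
        s1, s2 = sigmoid(w1), sigmoid(w2)
        g1 += dlogl_dp * a * (1.0 - s2 * b) * s1 * (1.0 - s1)
        g2 += dlogl_dp * b * (1.0 - s1 * a) * s2 * (1.0 - s2)
    w1 += lr * g1 / len(data)
    w2 += lr * g2 / len(data)

print(round(sigmoid(w1), 2), round(sigmoid(w2), 2))  # roughly 0.8 and 0.3
```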
Probabilistic logic programming is an effective formalism for encoding problems characterized by uncertainty. Some of these problems may require the optimization of probability values subject to constraints among the probability distributions of random variables. Here, we introduce a new class of probabilistic logic programs, namely probabilistic optimizable logic programs, and we provide an effective algorithm to find the best assignment to the probabilities of random variables such that a set of constraints is satisfied and an objective function is optimized.
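To make the problem statement concrete, here is a minimal sketch with an invented program, constraint and objective (not the algorithm proposed in the paper): two independent probabilistic facts feed a derived query, and a brute-force grid search finds the probability assignment that maximizes an objective while keeping the query probability below a bound.

```python
# Illustrative setup: choose probabilities p1, p2 of two independent facts so
# that the derived query probability P(q) = 1 - (1 - p1)(1 - p2) stays below a
# bound while an objective over the probabilities (here p1 + 2*p2) is maximized.

def query_prob(p1, p2):
    return 1.0 - (1.0 - p1) * (1.0 - p2)

best, best_obj = None, float("-inf")
steps = 101  # brute-force grid; a real solver would use constrained optimization
for i in range(steps):
    for j in range(steps):
        p1, p2 = i / (steps - 1), j / (steps - 1)
        if query_prob(p1, p2) > 0.5:   # constraint on the query probability
            continue
        obj = p1 + 2.0 * p2            # objective function over the probabilities
        if obj > best_obj:
            best, best_obj = (p1, p2), obj

print(best, round(best_obj, 2))        # → (0.0, 0.5) with objective 1.0
```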
ISBN (Print): 9783031013331; 9783031013324
In parameter learning, a partial interpretation most often contains information about only a subset of the parameters in the program. However, standard EM-based algorithms use all interpretations to learn all parameters, which significantly slows down learning. To tackle this issue, we introduce EMPLiFI, an EM-based parameter learning technique for probabilistic logic programs that improves the efficiency of EM by exploiting the rule-based structure of logic programs. In addition, EMPLiFI enables parameter learning of multi-head annotated disjunctions in ProbLog programs, which was not possible in previous methods. Theoretically, we show that EMPLiFI is correct. Empirically, we compare EMPLiFI to LFI-ProbLog and EMBLEM. The results show that EMPLiFI is the most efficient in learning single-head annotated disjunctions. In learning multi-head annotated disjunctions, EMPLiFI is more accurate than EMBLEM, while LFI-ProbLog cannot handle this task.
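As a toy illustration of why skipping interpretations pays off (this is plain EM on a two-fact program, not EMPLiFI itself), the sketch below learns the probability of a single probabilistic fact from partial interpretations and simply ignores interpretations that carry no information about it.

```python
# Toy program (assumed for illustration):
#     p :: f.      0.9 :: n.      q :- f, n.
# We learn p from partial interpretations that observe q. Interpretations that
# say nothing about q (or f) carry no information about p and are skipped,
# which is the kind of saving EMPLiFI exploits on real programs.

interpretations = [
    {"q": True}, {"q": True}, {"q": False}, {"q": True},
    {"other": True},           # irrelevant to p: skipped
    {"q": False}, {"q": True}, {"other": False},
]

P_N = 0.9
p = 0.5                        # initial guess for the parameter of f
for _ in range(50):
    expected_f, count = 0.0, 0
    for interp in interpretations:
        if "q" not in interp:  # no evidence touching f: skip entirely
            continue
        count += 1
        if interp["q"]:        # q true implies f true
            expected_f += 1.0
        else:                  # E-step: P(f | not q) = p(1-P_N) / (1 - p*P_N)
            expected_f += p * (1.0 - P_N) / (1.0 - p * P_N)
    p = expected_f / count     # M-step: new estimate of p

print(round(p, 3))             # ≈ 0.741 on this toy data
```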
Composite event recognition (CER) systems process streams of sensor data and infer composite events of interest by means of pattern matching. Data uncertainty is frequent in CER applications and results in erroneous detection. To support streaming applications, we present oPIECbd, an extension of oPIEC with bounded memory, leveraging interval duration statistics to resolve memory conflicts. oPIECbd can achieve predictive accuracy comparable to batch reasoning while avoiding its prohibitive cost. Furthermore, the use of interval duration statistics allows oPIECbd to significantly outperform earlier versions of bounded oPIEC. The empirical evaluation demonstrates the efficacy of oPIECbd on a benchmark activity recognition dataset, as well as on real data streams from the field of maritime situational awareness.
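The abstract does not spell out the eviction policy; purely as an illustration of "using interval duration statistics to resolve memory conflicts", the sketch below keeps a bounded buffer of candidate interval start points and, on overflow, drops a candidate whose implied duration is implausible according to the statistics of previously seen intervals. The class and its policy are invented for the example and do not reproduce oPIECbd.

```python
from collections import deque
from statistics import mean, pstdev

class BoundedStartPoints:
    def __init__(self, capacity):
        self.capacity = capacity
        self.points = deque()          # candidate start points (timestamps)
        self.durations = []            # durations of intervals seen so far

    def record_interval(self, start, end):
        self.durations.append(end - start)

    def add(self, t_start, t_now):
        if len(self.points) >= self.capacity:
            self._evict(t_now)
        self.points.append(t_start)

    def _evict(self, t_now):
        if len(self.durations) >= 2:
            limit = mean(self.durations) + 2 * pstdev(self.durations)
            # Drop a candidate whose implied interval is already implausibly
            # long according to the duration statistics, if one exists.
            for p in list(self.points):
                if t_now - p > limit:
                    self.points.remove(p)
                    return
        self.points.popleft()          # otherwise fall back to dropping the oldest

buf = BoundedStartPoints(capacity=3)
buf.record_interval(0, 10)
buf.record_interval(20, 28)
for t in (100, 140, 150, 160):
    buf.add(t, t)
print(list(buf.points))                # the start point 100 is evicted first
```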
Activity recognition refers to the detection of temporal combinations of 'low-level' or 'short-term' activities on sensor data. Various types of uncertainty exist in activity recognition systems and this often leads to erroneous detection. Typically, the frameworks aiming to handle uncertainty compute the probability of the occurrence of activities at each time-point. We extend this approach by defining the probability of a maximal interval and the credibility rate for such intervals. We then propose a linear-time algorithm for computing all probabilistic temporal intervals of a given dataset. We evaluate the proposed approach using a benchmark activity recognition dataset, and outline the conditions in which our approach outperforms time-point-based recognition.
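The paper's algorithm is linear-time; the brute-force sketch below only spells out one plausible reading of the definitions (interval probability as the average of point probabilities, maximal intervals as credible intervals not strictly contained in another credible interval), with invented data and threshold.

```python
def maximal_intervals(point_probs, threshold):
    """O(n^2) illustration only; the paper gives a linear-time algorithm."""
    n = len(point_probs)
    credible = []
    for i in range(n):
        total = 0.0
        for j in range(i, n):
            total += point_probs[j]
            if total / (j - i + 1) >= threshold:
                credible.append((i, j))
    # keep only intervals not strictly contained in another credible interval
    return [(i, j) for (i, j) in credible
            if not any((a <= i and j <= b) and (a, b) != (i, j)
                       for (a, b) in credible)]

probs = [0.1, 0.9, 0.8, 0.1, 0.05, 0.1, 0.95, 0.9, 0.1, 0.05]
print(maximal_intervals(probs, 0.5))   # → [(0, 2), (1, 7), (5, 8), (6, 9)]
```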
ISBN (Print): 9783030911003; 9783030910990
CBR systems leverage past experiences to make decisions. Recently, the AI community has taken an interest in making CBR systems explainable. Logic-based frameworks make answers straightforward to explain. However, they struggle in the face of conflicting information, unlike probabilistic techniques. We show how probabilistic inductive logic programming (PILP) can be applied in CBR systems to make transparent decisions combining logic and probabilities. Then, we demonstrate how our approach can be applied in scenarios presenting uncertainty.
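As a loose, invented illustration of combining case retrieval with probabilistic reasoning while keeping the decision explainable (not the PILP framework of the paper), the sketch below lets past cases vote for an outcome with similarity-derived weights, combines the votes with a noisy-or, and returns the supporting cases alongside the decision.

```python
# Toy case base and similarity measure; all names and values are invented.
cases = [
    {"features": {"fever": 1, "cough": 1}, "outcome": "flu"},
    {"features": {"fever": 0, "cough": 1}, "outcome": "cold"},
    {"features": {"fever": 1, "cough": 0}, "outcome": "flu"},
]

def similarity(a, b):
    keys = set(a) | set(b)
    return sum(a.get(k, 0) == b.get(k, 0) for k in keys) / len(keys)

def decide(new_features):
    support = {}                      # outcome -> list of (case index, weight)
    for idx, case in enumerate(cases):
        w = similarity(new_features, case["features"])
        support.setdefault(case["outcome"], []).append((idx, w))
    scores = {}
    for outcome, votes in support.items():
        prob = 1.0
        for _, w in votes:
            prob *= 1.0 - w           # noisy-or combination of the case votes
        scores[outcome] = 1.0 - prob
    best = max(scores, key=scores.get)
    return best, scores, support[best]  # decision plus its explanation

print(decide({"fever": 1, "cough": 1}))
```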
The application of deep learning models to increasingly complex contexts has led to a rise in the complexity of the models themselves. As a consequence, the number of hyper-parameters (HPs) to be set grows, and Hyper-Parameter Optimization (HPO) algorithms play a fundamental role in deep learning. Bayesian Optimization (BO) is the state of the art of HPO for deep learning models. BO keeps track of past results and uses them to build a probabilistic model, i.e., a probability density over the HPs. This work aims to improve BO applied to Deep Neural Networks (DNNs) through an analysis of the network's results on the training and validation sets. This analysis is obtained by applying symbolic tuning rules, implemented in probabilistic logic programming (PLP). The resulting system, called Symbolic DNN-Tuner, logically evaluates the results obtained from the training and validation phases and, by applying the symbolic tuning rules, fixes the network architecture and its HPs, leading to improved performance. In this paper, we present the general system and its implementation. We also show its graphical interface and a simple example of execution.
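The following sketch conveys the flavor of symbolic tuning rules in ordinary Python; the symptoms, thresholds and suggested actions are invented for illustration, whereas Symbolic DNN-Tuner encodes its rules in probabilistic logic programming.

```python
def diagnose(train_loss, val_loss, train_acc, val_acc):
    """Map training/validation metrics to symbolic symptoms (thresholds assumed)."""
    symptoms = []
    if val_loss - train_loss > 0.15 and train_acc - val_acc > 0.05:
        symptoms.append("overfitting")
    if train_acc < 0.7 and val_acc < 0.7:
        symptoms.append("underfitting")
    if train_loss > 1.0 and val_loss > 1.0:
        symptoms.append("high_loss")
    return symptoms

TUNING_RULES = {
    "overfitting": ["add dropout layer", "increase L2 regularization"],
    "underfitting": ["add units or layers", "train for more epochs"],
    "high_loss": ["decrease learning rate", "change batch size"],
}

def tuning_actions(history):
    actions = []
    for symptom in diagnose(**history):
        actions.extend(TUNING_RULES[symptom])
    return actions

history = {"train_loss": 0.2, "val_loss": 0.6, "train_acc": 0.95, "val_acc": 0.84}
print(tuning_actions(history))   # → actions suggested by the 'overfitting' rule
```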