The intersection non-emptiness problem for regular languages is one of the most classical and fundamental decision problems in formal language theory and plays an important role in many areas of computer science. In this paper, we propose five incremental algorithms for checking the intersection non-emptiness of regular expressions, which avoid explicitly constructing automata and use coordinatewise quasi-orders among states to reduce the search space. We conduct experiments comparing the algorithms with seven state-of-the-art tools. The results show that our algorithms outperform existing methods in efficiency on two real-world benchmarks.
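As context for the approach this abstract describes, here is a minimal Python sketch of the baseline idea such algorithms refine: exploring the product of two automata on the fly, without materializing it, and pruning revisited state pairs. The DFA encoding, the function name, and the plain `seen`-set pruning (standing in for the paper's coordinatewise quasi-orders) are all assumptions for illustration.

```python
# Hypothetical sketch: on-the-fly intersection non-emptiness of two DFAs.
# Product states are explored lazily (no explicit product automaton is
# built), and visited pairs are pruned before being re-explored.
from collections import deque

def intersection_nonempty(dfa1, dfa2):
    """dfa = (start, accepting_set, delta) with delta[(state, symbol)] -> state."""
    start = (dfa1[0], dfa2[0])
    acc1, acc2 = dfa1[1], dfa2[1]
    d1, d2 = dfa1[2], dfa2[2]
    symbols = {s for (_, s) in d1} & {s for (_, s) in d2}
    seen, queue = {start}, deque([start])
    while queue:
        q1, q2 = queue.popleft()
        if q1 in acc1 and q2 in acc2:
            return True                   # some word is accepted by both
        for a in symbols:
            nxt = (d1.get((q1, a)), d2.get((q2, a)))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)             # 'seen' plays the role that a
                queue.append(nxt)         # quasi-order-based pruning set would
    return False

# L(a*b) and L(ab*) both accept "ab", so the intersection is non-empty.
A = ("p", {"q"}, {("p", "a"): "p", ("p", "b"): "q"})   # accepts a*b
B = ("x", {"y"}, {("x", "a"): "y", ("y", "b"): "y"})   # accepts ab*
print(intersection_nonempty(A, B))                     # True
```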
Attribute reduction from decision tables has received much attention in recent years, with incremental methods based on traditional rough sets and their extended models mostly used for adding, removing, or updating objects or attributes. However, when dealing with dynamic decision tables, the existing incremental methods still recalculate information that has already been added to the decision table. In this article, we propose new incremental methods using a hybrid filter-wrapper approach with a fuzzy partition distance on fuzzy rough sets. Experimental results indicate that the proposed algorithms significantly decrease the cardinality of the reduct and achieve higher accuracy than other filter-based incremental methods such as IV-FS-FRS-2, IARM, ASS-IAR, IFSA, and IFSD.
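To make the filter step concrete, here is a minimal Python sketch of a generic greedy forward attribute-reduction loop on a crisp decision table, using positive-region dependency as the filter criterion. The fuzzy partition distance the abstract refers to would replace this dependency measure; the data layout (rows as dicts) and both function names are assumptions.

```python
# Hypothetical sketch: greedy forward attribute reduction on a crisp
# decision table. The paper's method uses a fuzzy partition distance on
# fuzzy rough sets; this only shows the generic filter loop it refines.
def dependency(rows, attrs, decision):
    """Fraction of objects whose equivalence class (w.r.t. attrs) is pure."""
    blocks = {}
    for r in rows:
        blocks.setdefault(tuple(r[a] for a in attrs), set()).add(r[decision])
    pure = sum(1 for r in rows
               if len(blocks[tuple(r[a] for a in attrs)]) == 1)
    return pure / len(rows)

def greedy_reduct(rows, attrs, decision):
    reduct = []
    full = dependency(rows, attrs, decision)
    while dependency(rows, reduct, decision) < full:
        # add the attribute yielding the largest dependency gain
        best = max(set(attrs) - set(reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct
```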
ISBN (print): 9781450361842
Several classic problems in graph processing and computational geometry are solved via incremental algorithms, which split computation into a series of small tasks acting on shared state that gets updated progressively. While the sequential variant of such algorithms usually specifies a fixed (but sometimes random) order in which the tasks should be performed, a standard approach to parallelizing such algorithms is to relax this constraint to allow for out-of-order parallel execution. This is the case for parallel implementations of Dijkstra's single-source shortest-paths (SSSP) algorithm, and for parallel Delaunay mesh triangulation. While many software frameworks parallelize incremental computation in this way, it is still not well understood whether this relaxed ordering approach can provide any complexity guarantees. In this paper, we address this problem and analyze the efficiency guarantees provided by a range of incremental algorithms when parallelized via relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation and sorting by insertion, schedulers with a relaxation factor of k, in terms of the maximum priority inversion allowed, introduce a maximum amount of wasted work of O(log n · poly(k)), where n is the number of tasks to be executed. For SSSP, we show that the additional work is O(poly(k) · d_max / w_min), where d_max is the maximum distance between two nodes and w_min is the minimum such distance. In practical settings where n >> k, this suggests that the overheads of relaxation will be outweighed by the improved scalability of the relaxed scheduler. On the negative side, we provide lower bounds showing that certain algorithms will inherently incur a non-trivial amount of wasted work due to scheduler relaxation, even for relatively benign relaxed schedulers.
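One way to picture the setting: a k-relaxed priority queue may return any of its k smallest entries, and stale pops show up as wasted work. Below is a minimal sequential Python simulation of relaxed SSSP under that assumption; the class, the function, and the uniform choice among the k smallest are illustrative stand-ins, not the schedulers analyzed in the paper.

```python
# Hypothetical sketch: a k-relaxed priority queue driving Dijkstra-style
# SSSP. pop() may return any of the k smallest keys (simulating priority
# inversions); 'wasted' counts stale re-settlements, the overhead bounded
# in the paper.
import heapq, random

class RelaxedPQ:
    def __init__(self, k):
        self.k, self.h = k, []
    def push(self, item):
        heapq.heappush(self.h, item)
    def pop(self):
        window = [heapq.heappop(self.h)
                  for _ in range(min(self.k, len(self.h)))]
        pick = window.pop(random.randrange(len(window)))
        for w in window:                  # push the unpicked entries back
            heapq.heappush(self.h, w)
        return pick
    def __bool__(self):
        return bool(self.h)

def relaxed_sssp(graph, src, k):
    """graph: node -> list of (neighbor, weight)."""
    dist, wasted = {src: 0}, 0
    pq = RelaxedPQ(k)
    pq.push((0, src))
    while pq:
        d, u = pq.pop()
        if d > dist.get(u, float("inf")):
            wasted += 1                   # stale entry: wasted work
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                pq.push((d + w, v))
    return dist, wasted
```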
The majority of autonomous robot exploration strategies operate by iteratively extracting the boundary between the mapped open space and unexplored space (the frontiers) and sending the robot towards the "best" frontier. Traditional approaches process the entire map to retrieve the frontier information at each decision step. This operation, however, does not scale to large map sizes and high decision frequencies. In this article, a computationally efficient incremental approach, Safe and Reachable Frontier Detection (SRFD), that processes locally updated map data to generate only the safe and reachable (i.e. valid) frontier information is introduced. This is achieved by solving two sub-problems: a) incrementally updating a database of boundary contours between mapped-free and unknown cells that are safe for the robot, and b) incrementally identifying the reachability of the contours in the database. Only the reachable boundary contours are extracted as frontiers. Experimental evaluation on real-world data sets validates that the proposed incremental update strategy provides a significant improvement in execution time while maintaining the global accuracy of frontier generation. The low computational footprint of the proposed frontier generation approach allows exploration strategies to process frontier information at much higher frequencies, which could be used to generate more efficient exploration strategies.
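The core incremental idea can be sketched in a few lines: after a local map update, only the changed cells and their neighbours can change frontier status. The grid encoding (a dict mapping cells to 0/1/-1) and the function names below are assumptions, and SRFD's safety and reachability checks are omitted for brevity.

```python
# Hypothetical sketch: incremental frontier maintenance on a 2-D occupancy
# grid (0 = free, 1 = occupied, -1 = unknown). Only cells touched by a
# local map update, plus their neighbours, are re-examined.
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def is_frontier(grid, cell):
    """A frontier cell is free and adjacent to at least one unknown cell."""
    r, c = cell
    return grid.get(cell) == 0 and any(grid.get((r + dr, c + dc)) == -1
                                       for dr, dc in NBRS)

def update_frontiers(grid, frontiers, changed_cells):
    """Revisit only the changed cells and their neighbours."""
    touched = set(changed_cells)
    for (r, c) in changed_cells:
        touched.update((r + dr, c + dc) for dr, dc in NBRS)
    for cell in touched:
        if is_frontier(grid, cell):
            frontiers.add(cell)
        else:
            frontiers.discard(cell)
    return frontiers
```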
Fuzzy model checking, also called multi-valued model checking, has proved to be an effective technique for verifying properties of fuzzy systems. One important issue with fuzzy model checking is that the model is frequently updated with small changes, and it is too costly to run a model-checking algorithm from scratch in response to every update. To address this issue, in this paper we consider an incremental model-checking approach for fuzzy systems that makes maximal use of previous model-checking results, in other words, that minimizes unnecessary recomputation. The models of our study are fuzzy Kripke structures, a fuzzy counterpart of Kripke structures used to describe fuzzy systems, while the properties of fuzzy systems are expressed using fuzzy computation tree logic, a fuzzy temporal logic derived from computation tree logic. The focus of the paper is on how to design incremental model-checking algorithms for two until-formulas, which characterize the maximal and, dually, minimal constrained reachability properties of fuzzy Kripke structures under transition insertions or deletions, but not both. The feasibility of our approach is illustrated by an example arising from the path-planning problem of mobile robots.
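For intuition, the quantity such algorithms maintain can be computed from scratch by a simple fixpoint iteration. The sketch below evaluates a maximal-reachability value on a fuzzy Kripke structure under max-min semantics; the data layout and function name are assumptions, and the incremental update after edge changes (the paper's actual contribution) is not shown.

```python
# Hypothetical sketch: from-scratch fixpoint evaluation of a maximal
# reachability value, roughly E(true U goal), on a fuzzy Kripke structure
# with max-min (Goedel) semantics. Incremental algorithms would repair this
# fixpoint after transition insertions or deletions instead of iterating
# from scratch.
def max_reach(states, R, goal):
    """R[(s, t)] in [0,1] is the transition degree; goal[s] the target degree."""
    V = dict(goal)
    while True:
        V_new = {s: max(goal[s],
                        max((min(R.get((s, t), 0.0), V[t]) for t in states),
                            default=0.0))
                 for s in states}
        if V_new == V:        # values come from a finite lattice, so this
            return V          # monotone iteration terminates
        V = V_new
```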
Simultaneously processing several large blocks of streaming data is a computationally expensive problem. Based on the incremental singular value decomposition algorithm, we propose a new procedure for calculating the factorization of the multiblock redundancy matrix M, which makes the multiblock method faster and more efficient when analyzing large streaming data and high-dimensional dense matrices. The procedure transforms a big-data problem into a small one by processing small high-dimensional matrices in which variables are in rows. Numerical experiments illustrate the accuracy and performance of the incremental solution for analyzing streaming multiblock redundancy data. The experiments demonstrate that the incremental algorithm can decompose a large matrix with a 75% reduction in execution time: it is more efficient to first partition the matrix M and then decompose it with the incremental algorithm than to decompose the entire matrix M using the standard singular value decomposition algorithm.
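The core building block of incremental SVD is updating an existing thin factorization when new columns arrive, rather than refactorizing everything. Here is a minimal NumPy sketch of a Brand-style column-append update; the function name and interface are assumptions, and the paper's procedure for the multiblock redundancy matrix M is more specialized than this generic step.

```python
# Hypothetical sketch: appending a block of new columns C to an existing
# thin SVD M = U @ diag(s) @ Vt (Brand-style update), the core step behind
# incremental SVD of a streaming matrix.
import numpy as np

def append_columns(U, s, Vt, C):
    L = U.T @ C                       # component of C inside span(U)
    H = C - U @ L                     # component orthogonal to span(U)
    J, K = np.linalg.qr(H)
    r, c = len(s), C.shape[1]
    # small augmented matrix [[diag(s), L], [0, K]] of size (r+c) x (r+c)
    Q = np.block([[np.diag(s), L],
                  [np.zeros((K.shape[0], r)), K]])
    Uq, sq, Vqt = np.linalg.svd(Q, full_matrices=False)
    U_new = np.hstack([U, J]) @ Uq
    V_new = np.block([[Vt.T, np.zeros((Vt.shape[1], c))],
                      [np.zeros((c, r)), np.eye(c)]]) @ Vqt.T
    return U_new, sq, V_new.T

# sanity check: the update reproduces the SVD of the augmented matrix [M | C]
M, C = np.random.rand(50, 8), np.random.rand(50, 3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
U2, s2, Vt2 = append_columns(U, s, Vt, C)
assert np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([M, C]))
```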
We introduce the problem of adapting a stable matching to forced and forbidden pairs. Given a stable matching M_1, a set Q of forced pairs, and a set P of forbidden pairs, we want to find a stable matching that includes all pairs from Q, no pair from P, and is as close as possible to M_1. We study this problem in four classic stable matching settings: Stable Roommates and Stable Marriage, each with and without ties. Our main contribution is a polynomial-time algorithm, based on the theory of rotations, for adapting Stable Roommates matchings to forced pairs. In contrast, we show that the same problem for forbidden pairs is NP-hard. However, our polynomial-time algorithm for forced pairs can be extended to a fixed-parameter tractable algorithm with respect to the number of forbidden pairs. Moreover, we study the setting where preferences contain ties: some of our algorithmic results can be extended, while other problems become intractable.
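To fix the setting, here is a minimal Python sketch of deferred acceptance (Gale-Shapley) for Stable Marriage in which forbidden pairs are simply deleted from the preference lists, so the resulting matching avoids P when one exists. This is only a baseline under the assumption of complete preference lists; it makes no attempt to stay close to M_1, which is what the paper's rotation-based algorithms address.

```python
# Hypothetical sketch: Gale-Shapley with forbidden pairs removed from the
# proposers' lists. Assumes complete preference lists on both sides; a man
# who exhausts his (trimmed) list simply stays unmatched.
def gale_shapley(men_prefs, women_prefs, forbidden=frozenset()):
    prefs = {m: [w for w in ws if (m, w) not in forbidden]
             for m, ws in men_prefs.items()}
    rank = {w: {m: i for i, m in enumerate(ms)}
            for w, ms in women_prefs.items()}
    free = list(prefs)
    nxt = {m: 0 for m in prefs}          # index of next woman to propose to
    husband = {}
    while free:
        m = free.pop()
        while nxt[m] < len(prefs[m]):
            w = prefs[m][nxt[m]]
            nxt[m] += 1
            if w not in husband:         # w accepts her first proposal
                husband[w] = m
                break
            if rank[w][m] < rank[w][husband[w]]:   # w trades up
                free.append(husband[w])
                husband[w] = m
                break
    return {m: w for w, m in husband.items()}
```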
As an effective tool for data analysis, formal concept analysis (FCA) is widely used in software engineering and machine learning. The construction of the concept lattice is a key step of FCA, and how to effectively update the concept lattice is still an open, interesting, and important issue. To address this problem, an incremental algorithm for concept lattices based on image structure similarity (SsimAddExtent) is presented. The proposed method maps each knowledge class of the concept lattice to a graphic; when a new object is added to or deleted from a knowledge class, the boundary profile of the graphic changes. The edge structure similarity of the graphic is introduced as an index of the degree of change before and after the update, and the concept lattice is updated on the basis of this index. We performed experiments to test SsimAddExtent, whose computational efficiency shows clear advantages over mainstream methods on almost all test points, especially on data sets with a large number of attributes; its asymptotic complexity, however, is not reduced compared with mainstream methods. Both theoretical analysis and performance tests show that the SsimAddExtent algorithm is a better choice when applying FCA to large-scale or non-sparse data.
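For readers new to FCA, the sketch below shows the derivation operators and a naive enumeration of all concepts of a small formal context; adding an object then forces a full rebuild, which is exactly the recomputation that incremental algorithms like SsimAddExtent avoid. The context encoding (object mapped to its attribute set) and the function names are assumptions.

```python
# Hypothetical sketch: naive enumeration of all formal concepts of a small
# context ctx: object -> set of attributes. Exponential in the number of
# objects, so usable only as a baseline for incremental methods.
from itertools import combinations

def intent(objs, ctx):                 # attributes shared by all objects
    return set.intersection(*(ctx[o] for o in objs)) if objs else None

def concepts(ctx, attrs):
    """All (extent, intent) pairs of the context."""
    found = set()
    for r in range(len(ctx) + 1):
        for objs in combinations(ctx, r):
            a = intent(set(objs), ctx)
            a = attrs if a is None else a
            ext = frozenset(o for o in ctx if a <= ctx[o])   # closure
            found.add((ext, frozenset(a)))
    return found

ctx = {"o1": {"a", "b"}, "o2": {"b", "c"}}
lattice = concepts(ctx, {"a", "b", "c"})
ctx["o3"] = {"a", "c"}                       # adding one object forces,
lattice = concepts(ctx, {"a", "b", "c"})     # naively, a full rebuild
```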
We consider the problem of a Parameter Server (PS) that wishes to learn a model that fits data distributed on the nodes of a graph. We focus on Federated Learning (FL) as a canonical application. One of the main challenges of FL is the communication bottleneck between the nodes and the parameter server. A popular solution in the literature is to allow each node to perform several local updates on the model in each iteration before sending it back to the PS. While this mitigates the communication bottleneck, the statistical heterogeneity of the data owned by the different nodes has proven to delay convergence and bias the model. In this work, we study random walk (RW) learning algorithms for tackling the communication and data heterogeneity problems. The main idea is to leverage available direct connections among the nodes themselves, which are typically "cheaper" than communication to the PS. In a random walk, the model is thought of as a "baton" that is passed from a node to one of its neighbors after being updated in each iteration. The challenge in designing the RW is the data heterogeneity and the uncertainty about the data distributions. Ideally, we would want to visit more often the nodes that hold more informative data. We cast this problem as a sleeping multi-armed bandit (MAB) to design a near-optimal node sampling strategy that achieves variance-reduced gradient estimates and sub-linearly approaches the optimal sampling strategy. Based on this framework, we present an adaptive random walk learning algorithm. We provide theoretical guarantees on its convergence. Our numerical results validate our theoretical findings and show that our algorithm outperforms existing random walk algorithms.
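To illustrate the walk structure, here is a minimal Python sketch of random-walk SGD over a graph where the next node is picked by a crude bandit rule (epsilon-greedy on observed gradient magnitudes) restricted to the current node's neighbours, the "sleeping" constraint. Every name, the data layout, and the update rule are illustrative assumptions; the paper's sleeping-MAB strategy is principled rather than this heuristic.

```python
# Hypothetical sketch: a random-walk SGD pass where only the current node's
# neighbours are "awake" and eligible to receive the model next. The score
# is a running average of gradient magnitude, a rough proxy for how
# informative a node's data is.
import random

def rw_sgd(graph, data, grad, model, start, steps, lr=0.1, eps=0.2):
    """graph: node -> neighbour list; grad(model, data[node]) -> gradient."""
    score = {v: 0.0 for v in graph}       # running reward per node
    node = start
    for _ in range(steps):
        g = grad(model, data[node])       # local gradient at current node
        model = [m - lr * gi for m, gi in zip(model, g)]
        score[node] = 0.9 * score[node] + 0.1 * sum(gi * gi for gi in g)
        nbrs = graph[node]                # the sleeping constraint
        node = (random.choice(nbrs) if random.random() < eps
                else max(nbrs, key=score.__getitem__))
    return model
```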
Nowadays there is great interest in the visualization of property graphs to make their navigation, inspection, and visual analysis easier. However, property graphs can be quite large, and rendering them in web browsers can produce a dark cloud of points that is difficult to explore visually. With the aim of reducing the size of the visualized graph, several approaches have been proposed for substituting clusters of related vertices with aggregated meta-nodes and introducing meta-edges among them, but they usually keep the graph in main memory and do not adopt efficient data structures for extracting parts of it from disk. The purpose of this paper is to optimize the preparation of the graph to be visualized at a given resolution level by introducing refined data structures and specifically tailored algorithms. By means of them, the rendering time is reduced when changing the current visualization through zoom-in, zoom-out, and related operations. Starting from a cluster hierarchy that represents the possible aggregations of graph nodes, we characterize a visualization as a horizontal slice of the hierarchy and propose indexing structures and incremental algorithms for quickly passing to a new visualization with minimal changes to the current one. In this process, we ensure a consistent and efficient aggregation of the additive properties associated with nodes and edges. An extensive experimental analysis has been conducted to assess the quality of the proposed solution.
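The basic operation the paper accelerates can be sketched directly: cut the cluster hierarchy at a resolution level and aggregate an additive property into each meta-node. The node representation (name, depth, children) and the function name below are assumptions; the paper's indexes and incremental slice-to-slice transitions build on top of this naive version.

```python
# Hypothetical sketch: cutting a cluster hierarchy at a resolution level
# and summing an additive node property (e.g. a count) into each meta-node.
def slice_hierarchy(root, level, weight):
    """Return {meta_node: aggregated_weight} for the horizontal cut at `level`.

    Each hierarchy node is (name, depth, children); leaves have no children.
    """
    agg = {}
    def subtree_weight(node):
        name, _, children = node
        return weight.get(name, 0) + sum(subtree_weight(c) for c in children)
    def walk(node):
        name, depth, children = node
        if depth == level or not children:
            agg[name] = subtree_weight(node)   # this node becomes a meta-node
        else:
            for c in children:
                walk(c)
    walk(root)
    return agg
```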