Proving the correctness of distributed or concurrent algorithms is a complex process. Errors in the reasoning are hard to find, calling for computer-checked proof systems such as Coq or TLA+. To use these tools, sequential specifications of base objects are required to build modular proofs by composition. Unfortunately, many concurrent objects lack a sequential specification. This article describes a method to transform any task, a specification of a concurrent one-shot distributed problem, into a sequential specification involving two calls, set and get. This enables designers to compose proofs, facilitating modular computer-checked proofs of algorithms built from tasks and sequential objects as building blocks. The Moir & Anderson implementation of renaming using splitters (wait-free concurrent objects) is an algorithm designed by composition, but it is not modular. Using our transformation, a modular description of the algorithm is given in TLA+ and mechanically verified with the TLA+ Proof System. To the best of our knowledge, this is the first time this algorithm has been mechanically verified. (c) 2023 Elsevier Inc. All rights reserved.
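For readers unfamiliar with the splitter object mentioned in this abstract, the following is a minimal Java sketch of the classic wait-free splitter used in Moir & Anderson's renaming construction. The class and method names are assumptions for illustration; the paper itself works with a TLA+ specification, not Java code. Each process leaves the splitter with one of three outcomes, and at most one process can obtain stop.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of the classic wait-free splitter (names are assumptions, not the paper's API).
final class Splitter {
    enum Direction { STOP, RIGHT, DOWN }

    private final AtomicInteger last = new AtomicInteger(-1);      // X: last process seen (-1 = none)
    private final AtomicBoolean closed = new AtomicBoolean(false); // Y: "door" flag

    // processId must be a distinct non-negative identifier per process.
    Direction visit(int processId) {
        last.set(processId);              // X := id
        if (closed.get()) {               // if Y then move right
            return Direction.RIGHT;
        }
        closed.set(true);                 // Y := true
        if (last.get() == processId) {    // if X = id then stop at this splitter
            return Direction.STOP;
        }
        return Direction.DOWN;            // otherwise move down
    }
}
```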
ISBN (Print): 9781728156255
Nowadays, high-resolution aerial and satellite photographs are readily available and suitable for further analysis. Photos can be taken from a satellite, an aircraft, a robot drone or even a multicopter; generally speaking, every solution yields similarly high-resolution images. The resulting aerial photos can be analyzed against various criteria, whose conditions can be specified by the customer. One such task is counting the pools built in areas with detached houses, which can serve as the basis for specific data. One of the most obvious automated procedures for object detection is a neural network, which, trained on properly chosen samples, can decide for individual image regions whether they contain the desired object. Even with a large number of training samples, a neural network may give false-positive or false-negative results on the image submitted for analysis. These mistakes can arise because the training samples and the photos of the examined areas were not taken in the same season or at the same time of day, or because they show pools in different states of maintenance. Instead of further training the network, these mistakes can be eliminated with classical image-analysis methods whose results are fused with the network's output. Such methods include color-coding analysis and fuzzy-logic analysis based on some property of the image. Fuzzy logic can detect visual objects when there is a definite, distinctive property, such as a color range or color depth.
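As an illustration of the fusion idea described in this abstract, here is a hedged Java sketch: all class names, color ranges and thresholds below are assumptions for illustration and not the paper's method. It combines a network confidence score for a candidate region with a simple fuzzy membership score computed from the region's dominant hue.

```java
// A hedged sketch (assumed names and thresholds) of fusing a neural-network score
// with a fuzzy colour-membership score so that regions the network flags but that
// contain no pool-like colours are rejected.
final class PoolDetectionFusion {

    /** Triangular fuzzy membership of a hue value in an assumed "pool blue" band (ignores hue wrap-around). */
    static double blueMembership(double hueDegrees) {
        double centre = 195.0, halfWidth = 30.0;             // assumed cyan/blue band
        double distance = Math.abs(hueDegrees - centre);
        return Math.max(0.0, 1.0 - distance / halfWidth);
    }

    /** Fuse the network score with the fuzzy colour score; both lie in [0, 1]. */
    static boolean isPool(double networkScore, double meanHueDegrees) {
        double fused = Math.min(networkScore, blueMembership(meanHueDegrees)); // fuzzy AND (min)
        return fused > 0.5;                                   // assumed decision threshold
    }
}
```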
ISBN (Digital): 9783030349929
ISBN (Print): 9783030349929; 9783030349912
Proving the correctness of distributed or concurrent algorithms is a mind-challenging and complex process. Slight errors in the reasoning are difficult to find, calling for computer-checked proof systems. To build computer-checked proofs with the usual tools, such as Coq or TLA+, sequential specifications of all base objects used as building blocks in a given algorithm are a prerequisite for a modular proof built by composition. Alas, many concurrent objects do not have a sequential specification. This article describes a systematic method to transform any task, a specification method that captures concurrent one-shot distributed problems, into a sequential specification involving two calls, set and get. This transformation allows system designers to compose proofs, thus providing a framework for modular computer-checked proofs of algorithms designed using tasks and sequential objects as building blocks. The Moir & Anderson implementation of renaming using splitters is an iconic example of such an algorithm designed by composition.
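To make the shape of the set/get sequential specification concrete, here is a minimal sketch written as a Java interface. The generic parameters and method signatures are assumptions for illustration only; the paper defines the transformation formally, not as a programming API.

```java
// A hedged sketch of the two-call sequential specification the abstract describes:
// a one-shot task is wrapped as an object on which each process first deposits its
// input (set) and later retrieves an output permitted by the task (get).
interface SequentialTaskObject<I, O> {
    void set(int processId, I input); // record the calling process's input for the one-shot task
    O get(int processId);             // return an output value allowed by the task specification
}
```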
ISBN (Print): 9783030003500; 9783030003494
Concurrent languages such as Perl 6 fully leverage the power of current multi-core and hyper-threaded computer architectures, and they include easy ways of automatically parallelizing code. However, to obtain more computational capability by using all threads and cores, algorithms need to be redesigned to run in a concurrent environment; in particular, a reactive, fully functional pattern requires turning the algorithm into a series of stateless steps, with simple functions that receive all the context and map it to the next stage. In this paper, we analyze different versions of these stateless, reactive architectures applied to evolutionary algorithms, assessing how they interact with the characteristics of the evolutionary algorithm itself, and we show how they improve scaling behavior and performance. We use the Perl 6 language, a modern, concurrent language that was released recently and is still under very active development.
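To make the "series of stateless steps" idea concrete, here is a hedged sketch of one generation expressed as a pure function from the current population to the next one, so the runtime is free to evaluate individuals concurrently. It is written in Java rather than the paper's Perl 6, and all names and the mutation scheme are assumptions for illustration.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// A hedged sketch (not the paper's code) of a stateless evolutionary step:
// the next population is computed as a pure function of the current one.
final class StatelessEvolutionStep {

    static List<double[]> nextGeneration(List<double[]> population, long seed) {
        return IntStream.range(0, population.size())
                .parallel()                                   // variation runs on all cores
                .mapToObj(i -> mutate(population.get(i), new Random(seed + i)))
                .collect(Collectors.toList());
    }

    private static double[] mutate(double[] genome, Random rng) {
        double[] child = genome.clone();
        int gene = rng.nextInt(child.length);                 // pick one gene
        child[gene] += rng.nextGaussian() * 0.1;              // small Gaussian perturbation
        return child;
    }
}
```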
ISBN (Print): 9798400701382
A concurrent computing course is filled with challenges for upper-level programming students. Understanding concurrency provides deeper insight into many modern computing and programming language behaviors, but the subject matter can be difficult even for relatively proficient students. It can be a challenge to help students navigate and understand these unfamiliar topics. While there is a difference in general programming familiarity, teaching this novel material is not unlike some challenges faced when engaging introductory students with first programming concepts. In this work, we explore the use of analogy by students while learning a novel programming methodology. We investigate perceptions of the utility of analogy and creation of analogies in the concurrent course. We also examine perceptions of analogy value across students' computing education and factors which impacted their use or disuse of provided or student-generated analogies. This exploration suggests that pedagogical analogy design can be memorable and significant for student understanding. It further suggests that analogies inherent in concept naming and foundational examples may have even greater salience. While not all students create analogies, those that do share both unique examples and additions to existing examples that helped them understand core concepts. Students had mixed responses on whether analogy as a tool was used in their lower-level courses. Despite this, most found analogies to be useful, with a majority finding them even more useful in upper-level programming courses.
In this article, we present an algorithm for a high-performance, unbounded, portable, multi-producer/multi-consumer, lock-free FIFO (first-in, first-out) queue. Aside from its competitive performance on current hardware, it is further characterized by its integrated memory reclamation mechanism, which is able to reliably and deterministically de-allocate nodes as soon as the final operation holding a reference has concluded, similar to reference counting. This differentiates our approach from most other lock-free data structures, which usually require external (generic) memory reclamation or garbage collection mechanisms such as hazard pointers. Our deterministic memory reclamation mechanism completely prevents the build-up of memory awaiting reclamation and is hence very memory efficient, yet it does not introduce any substantial performance overhead. By utilizing concrete knowledge about the internal structure and access patterns of our queue, we are able to construct and constrain the reclamation mechanism in a way that keeps the overhead for memory management almost entirely off the common fast path. The presented algorithm is portable to all modern 64-bit processor architectures, as it relies only on the commonly available, lock-free atomic synchronization primitives compare-and-swap and fetch-and-add.
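The paper's queue is not reproduced here. As background, the following is a textbook Michael-Scott lock-free FIFO written in Java, included only to illustrate the CAS-based enqueue/dequeue pattern that such queues build on. In Java the garbage collector reclaims retired nodes; the paper's contribution is doing this deterministically without a GC or external schemes such as hazard pointers.

```java
import java.util.concurrent.atomic.AtomicReference;

// Not the paper's algorithm: a textbook Michael-Scott lock-free FIFO sketch.
final class MSQueue<T> {
    private static final class Node<T> {
        final T value;
        final AtomicReference<Node<T>> next = new AtomicReference<>(null);
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head;
    private final AtomicReference<Node<T>> tail;

    MSQueue() {
        Node<T> dummy = new Node<>(null);                    // dummy-node scheme
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    void enqueue(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> last = tail.get();
            Node<T> next = last.next.get();
            if (next == null) {
                if (last.next.compareAndSet(null, node)) {   // link the new node
                    tail.compareAndSet(last, node);          // swing tail (others may help)
                    return;
                }
            } else {
                tail.compareAndSet(last, next);              // help a lagging tail
            }
        }
    }

    T dequeue() {
        while (true) {
            Node<T> first = head.get();
            Node<T> next = first.next.get();
            if (next == null) return null;                   // queue is empty
            if (head.compareAndSet(first, next)) {
                return next.value;                           // value lives in the successor node
            }
        }
    }
}
```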
We present XIndex, a concurrent index library designed for fast queries. It includes a concurrent ordered index (XIndex-R) and a concurrent hash index (XIndex-H). Similar to a recent proposal of the learned index, the indexes in XIndex use learned models to optimize index efficiency. Compared with the learned index, for the ordered index, XIndex-R is able to handle concurrent writes effectively and adapts its structure according to runtime workload characteristics. For the hash index, XIndex-H is able to avoid blocking concurrent writes during resize operations. Furthermore, the indexes in XIndex can index string keys much more efficiently than the learned index. We demonstrate the advantages of XIndex with YCSB, with TPC-C (KV), a TPC-C-inspired benchmark for key-value stores, and with micro-benchmarks. Compared with the ordered indexes Masstree and Wormhole, XIndex-R achieves up to 3.2x and 4.4x performance improvement on a 24-core machine. Compared with the Intel TBB HashMap hash index, XIndex-H achieves up to 3.1x speedup. Performance further improves by 91% after adding the optimizations for indexing string keys. The library is open-sourced.
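As background for the "learned model" idea this abstract builds on, here is a hedged Java sketch of a learned lookup: a linear model predicts where a key should live in a sorted array, and a bounded search around the prediction corrects the model's error. The names, fields and the single linear model are assumptions for illustration and are unrelated to XIndex's actual structure.

```java
import java.util.Arrays;

// A hedged sketch of a learned-index lookup (not XIndex): predict, then search a small window.
final class LearnedLookupSketch {
    private final long[] keys;          // sorted, non-empty key array
    private final double slope, intercept;
    private final int maxError;         // assumed worst-case prediction error of the model

    LearnedLookupSketch(long[] sortedKeys, double slope, double intercept, int maxError) {
        this.keys = sortedKeys;
        this.slope = slope;
        this.intercept = intercept;
        this.maxError = maxError;
    }

    /** Returns the position of key, or -1 if absent. */
    int find(long key) {
        int predicted = (int) (slope * key + intercept);
        int lo = Math.max(0, predicted - maxError);
        int hi = Math.min(keys.length - 1, predicted + maxError);
        int idx = Arrays.binarySearch(keys, lo, hi + 1, key); // search only the error window
        return idx >= 0 ? idx : -1;
    }
}
```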
In the last two decades, great attention has been devoted to the design of non-blocking and linearizable data structures, which enable exploiting the scaled-up degree of parallelism in off-the-shelf shared-memory multi-core machines. In this context, priority queues are highly challenging. Indeed, concurrent attempts to extract the highest-priority item are prone to create detrimental thread conflicts that lead to abort/retry of the operations. In this article, we present the first priority queue that jointly provides: (i) lock-freedom and linearizability; (ii) conflict resiliency against concurrent extractions; (iii) adaptiveness to different contention profiles; and (iv) amortized constant-time access for both insertions and extractions. Beyond presenting our solution, we also provide a proof of its correctness based on an assertional approach. In addition, we present an experimental study on a 64-CPU machine showing that our proposal provides performance improvements over state-of-the-art non-blocking priority queues.
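To illustrate the extraction conflicts this abstract refers to, here is a naive Java sketch (assumed names, not the paper's algorithm): every extractor CASes the same head pointer of a sorted lock-free list, so under contention all but one must abort and retry, which is exactly the behavior a conflict-resilient design seeks to avoid.

```java
import java.util.concurrent.atomic.AtomicReference;

// Not the paper's queue: a naive sketch showing why concurrent extractions conflict.
// Insertion into the sorted list is omitted; only the retry-prone extraction is shown.
final class NaiveMinExtract<T> {
    static final class Node<T> {
        final T item;
        final Node<T> next;
        Node(T item, Node<T> next) { this.item = item; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>(null);

    T extractMin() {
        while (true) {
            Node<T> first = head.get();
            if (first == null) return null;                        // queue is empty
            if (head.compareAndSet(first, first.next)) {
                return first.item;                                  // won the CAS on the shared head
            }
            // lost to a concurrent extractor: abort and retry
        }
    }
}
```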
The size of a data structure (i.e., the number of elements in it) is a widely used property of a data set. However, for concurrent programs, obtaining a correct size efficiently is non-trivial. In fact, the literature does not offer a mechanism to obtain a correct (linearizable) size of a concurrent data set without resorting to inefficient solutions, such as taking a full snapshot of the data structure to count the elements, or acquiring one global lock in all update and size operations. This paper presents a methodology for adding a concurrent linearizable size operation to sets and dictionaries with a relatively low performance overhead. Theoretically, the proposed size operation is wait-free with asymptotic complexity linear in the number of threads (independently of the data-structure size). Practically, we evaluated the performance overhead by adding size to various concurrent data structures in Java: a skip list, a hash table and a tree. The proposed linearizable size operation executes faster by orders of magnitude compared to the existing option of taking a snapshot, while incurring a throughput loss of 1%-20% on the original data structure's operations.
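A hedged sketch of why a size computed by summing per-thread counters costs time linear in the number of threads rather than the number of elements. This naive striped-counter version is not linearizable; making such a sum linearizable is the paper's contribution, and all names below are assumptions for illustration.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Not the paper's construction: a simple per-thread ("striped") counter sketch.
final class StripedSize {
    private final AtomicLongArray deltas;    // one slot per thread (padded in practice to avoid false sharing)

    StripedSize(int maxThreads) { deltas = new AtomicLongArray(maxThreads); }

    void onInsert(int threadId) { deltas.getAndIncrement(threadId); }  // called after a successful insert
    void onRemove(int threadId) { deltas.getAndDecrement(threadId); }  // called after a successful remove

    long size() {                             // O(#threads), independent of the data-structure size
        long sum = 0;
        for (int i = 0; i < deltas.length(); i++) {
            sum += deltas.get(i);
        }
        return sum;                           // only quiescently consistent, not linearizable
    }
}
```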