ISBN (print): 9783030278786; 9783030278779
The paper addresses the question of how to develop automated planning systems that are fast enough for real-time management of supply networks while accommodating manual plan corrections by users. Several practical situations and planning-system use cases are considered. The paper proposes several methods that increase data processing speed in practical cases, including parallel data processing, dynamic control of the solution-space search depth, and self-regulation of the system's behavior based on the specifics of the data being processed.
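As a loose illustration of two of the named methods, the sketch below combines thread-parallel evaluation of planning branches with dynamic control of the search depth under a real-time budget. The planning problem, the `expand` and `score` callbacks, and the budget value are illustrative assumptions, not the paper's actual system.

```python
# Hedged sketch: parallel branch evaluation + dynamic search-depth control
# under a real-time budget. All names and parameters are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def search(state, depth, expand, score):
    """Depth-limited search over the solution space (placeholder logic)."""
    if depth == 0:
        return score(state)
    children = expand(state)
    if not children:
        return score(state)
    return max(search(c, depth - 1, expand, score) for c in children)

def plan(root_states, expand, score, budget_s=1.0):
    """Iterative deepening: root branches are evaluated in parallel and
    the search depth grows only while the time budget allows."""
    deadline = time.perf_counter() + budget_s
    best, depth = None, 0
    while depth == 0 or time.perf_counter() < deadline:
        depth += 1
        with ThreadPoolExecutor() as pool:   # parallel data processing
            results = list(pool.map(
                lambda s: search(s, depth, expand, score), root_states))
        best = max(results)                  # keep the deepest completed result
    return best, depth
```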
Real-time scheduling and locking protocols are fundamental facilities for constructing time-critical systems. For parallel real-time tasks, predictable locking protocols are required when concurrent sub-jobs mutually excl...
High penetration of distributed generation (DG) sources into a decentralized power system causes several disturbances, making monitoring and operational control of the system complicated. Moreover, because they are passive, modern DG systems are unable to detect and report these power-quality disturbances in an intelligent way. This paper proposes an intelligent, novel technique capable of making real-time decisions on the occurrence of different DG events, such as islanding, capacitor switching, unsymmetrical faults, load switching, and loss of a parallel feeder, and of distinguishing these events from the normal mode of operation. The event classification technique is designed to diagnose the distinctive pattern of the time-domain signal representing a measured electrical parameter, such as the voltage at the DG point of common coupling (PCC), during such events. Different power system events are then classified into their root causes using long short-term memory (LSTM), a deep learning algorithm for sequence-to-label classification. A total of 1100 events showcasing islanding, faults, and other DG events were generated from a model of a smart distributed generation system in a MATLAB/Simulink environment. Classifier performance was evaluated using 5-fold cross-validation. A genetic algorithm (GA) was used to determine the optimum values of the classification hyper-parameters and the best combination of features. The simulation results indicate that events are classified with high precision and specificity within ten cycles of their occurrence, achieving a 99.17% validation accuracy. The performance of the proposed classification technique does not degrade in the presence of noise in the test data, with multiple DG sources in the model, or with the inclusion of a motor-starting event in the training samples.
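A minimal sketch of the sequence-to-label setup described above, assuming a Keras-style LSTM and scikit-learn's stratified cross-validation; the window length, layer sizes, class count, and training settings are guesses, not the paper's configuration:

```python
# Hedged sketch: LSTM classification of PCC voltage windows with 5-fold CV.
# Shapes and hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras

N_CLASSES = 6   # islanding, capacitor switching, faults, ... (assumed)
WINDOW = 200    # samples per voltage window (assumed)

def build_model():
    return keras.Sequential([
        keras.layers.Input(shape=(WINDOW, 1)),  # univariate voltage signal
        keras.layers.LSTM(64),                  # hidden size is a guess
        keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

def cross_validate(X, y, folds=5):
    """X: (n_events, WINDOW, 1) float array; y: integer class labels."""
    accs = []
    for train_idx, test_idx in StratifiedKFold(folds, shuffle=True).split(X, y):
        model = build_model()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=20, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        accs.append(acc)
    return float(np.mean(accs))   # mean validation accuracy over folds
```

In the paper, a GA additionally searches over such hyper-parameters and feature combinations; here they are simply fixed.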
ISBN (print): 9781728116518
In the past few years, data have grown continuously and in various forms, a phenomenon called Big Data. In addition, data analytics plays an increasingly important role in most organizations. For these reasons, many efficient distributed messaging systems have been introduced to handle Big Data in real time. However, choosing appropriate and efficient methods and tools to transfer Big Data is still challenging. This paper therefore compares the architecture, performance, and resource utilization of Apache Kafka, one of the most popular Big Data messaging tools, and Apache Pulsar, a similar and more recent tool. After implementing both systems in the same environment, the results show that Pulsar outperforms Kafka in throughput, latency, and average resource utilization, especially when messages are small (e.g., 1 KB and 1 MB).
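To make the kind of comparison concrete, here is a minimal producer-throughput sketch using the kafka-python and pulsar-client libraries. The broker URLs, topic name, and message count are placeholder assumptions; a real benchmark such as the paper's would also need warm-up runs, acknowledgement/batching parity, and consumer-side latency measurement.

```python
# Hedged sketch: naive producer throughput for Kafka vs. Pulsar.
import time
from kafka import KafkaProducer   # pip install kafka-python
import pulsar                     # pip install pulsar-client

PAYLOAD = b"x" * 1024   # 1 KB message, matching the small-message case
N = 100_000

def bench_kafka():
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    start = time.perf_counter()
    for _ in range(N):
        producer.send("bench-topic", PAYLOAD)   # async send
    producer.flush()                             # wait for all acks
    return N / (time.perf_counter() - start)     # messages per second

def bench_pulsar():
    client = pulsar.Client("pulsar://localhost:6650")
    producer = client.create_producer("bench-topic")
    start = time.perf_counter()
    for _ in range(N):
        producer.send(PAYLOAD)                   # synchronous send
    rate = N / (time.perf_counter() - start)
    client.close()
    return rate
```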
ISBN (print): 9781728112466
Modern data generation is enormous; we now capture events at increasingly fine granularity and require processing at rates approaching real time. For graph analytics, this explosion in data volumes and processing demands has not been matched by improved algorithmic or infrastructure techniques. Instead of exploring solutions that keep up with the velocity of the generated data, most of today's systems focus on analyzing individually built historic snapshots. Modern graph analytics pipelines must evolve to become viable at massive scale and move away from static, post-processing scenarios to support online analysis. This paper presents our progress towards a system that analyzes dynamic incremental graphs and is responsive at single-change granularity. We present an algorithmic structure using principles of recursive updates and monotonic convergence, and a set of incremental graph algorithms that can be implemented on top of this structure. We also present the middleware required to support graph analytics at fine, event-level granularity. We envision that graph topology changes are processed asynchronously, concurrently, and independently (without shared state), converging an algorithm's state (e.g., single-source shortest path distances, connectivity labeling) to its deterministic answer. The expected long-term impact of this work is to enable a transition away from offline graph analytics, allowing knowledge to be extracted from networked systems in real time.
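A small sketch in the spirit of the recursive-update, monotonically converging structure described above: single-source shortest paths maintained at single-edge-insertion granularity. The graph representation and API are assumptions, not the paper's middleware.

```python
# Hedged sketch: incremental SSSP where each edge insertion triggers a local,
# monotone relaxation that propagates only improvements.
import heapq
import math
from collections import defaultdict

class IncrementalSSSP:
    def __init__(self, source):
        self.adj = defaultdict(dict)                 # u -> {v: weight}
        self.dist = defaultdict(lambda: math.inf)
        self.dist[source] = 0.0

    def add_edge(self, u, v, w):
        """Process one topology change. Distances only ever decrease, so the
        state converges monotonically to the deterministic answer regardless
        of the order in which independent updates are applied."""
        self.adj[u][v] = w
        if self.dist[u] + w < self.dist[v]:
            heap = [(self.dist[u] + w, v)]
            while heap:
                d, x = heapq.heappop(heap)
                if d >= self.dist[x]:
                    continue                          # stale: a better path won
                self.dist[x] = d
                for y, wy in self.adj[x].items():
                    if d + wy < self.dist[y]:
                        heapq.heappush(heap, (d + wy, y))
```

Note that edge deletions break this monotonicity and would require additional machinery (e.g., resetting affected distances) beyond this insertion-only sketch.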
ISBN (print): 9783030185909; 9783030185893
Query processing over data streams has attracted much attention in real-time applications. While much effort has been devoted to query processing of data streams in distributed environments, no previous study has focused on multiple-query optimization. To address this problem, we propose EsperDist, a distributed query engine for multiple-query optimization over data streams. EsperDist can significantly reduce network transmission overhead and memory usage by reusing operators in the query plan. Moreover, EsperDist makes a best effort to minimize query cost so as to avoid resource bottlenecks on a single machine. In this demo, we present the architecture and workflow of EsperDist using datasets collected from real-world applications. We also provide a user-friendly interface to monitor query results and interact with the system in real time.
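One common way to realize operator reuse across queries is hash-consing of plan nodes, sketched below; the plan representation is an illustrative assumption, not EsperDist's actual implementation.

```python
# Hedged sketch: sharing identical stream operators between multiple
# continuous queries, so each operator runs (and buffers state) only once.
class OperatorRegistry:
    def __init__(self):
        self._ops = {}   # canonical key -> shared operator node

    def get_or_create(self, op_type, params, inputs):
        """Return the existing operator for (type, params, inputs), or
        register a new one. Identical sub-plans collapse to one node."""
        key = (op_type, params, tuple(id(i) for i in inputs))
        if key not in self._ops:
            self._ops[key] = {"type": op_type, "params": params,
                              "inputs": inputs, "subscribers": []}
        return self._ops[key]

registry = OperatorRegistry()
src = registry.get_or_create("source", ("sensor-stream",), ())
q1_filter = registry.get_or_create("filter", ("temp > 30",), (src,))
# A second query with the same predicate reuses the existing operator,
# saving network transmission and memory as described in the abstract.
q2_filter = registry.get_or_create("filter", ("temp > 30",), (src,))
assert q1_filter is q2_filter
```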
This paper aims to analyze how innovative Artificial Intelligence (AI) systems (Voiceitt®) for non-standard speech recognition may revolutionize Augmentative and Alternative Communication (AAC) technology for people ...
This paper presents a study of the adaptation of a Non-Linear Iterative Partial Least Squares (NIPALS) algorithm, applied to hyperspectral imaging, to a Massively Parallel Processor Array (MPPA) manycore architecture that assembles 256 cores distributed over 16 clusters. This work aims at optimizing the internal communications of the platform to achieve real-time processing of large data volumes with limited computational resources and memory bandwidth. Since hyperspectral images are composed of extensive volumes of spectral information, real-time requirements, which are upper-bounded by the image capture rate of the hyperspectral sensor, are a challenging objective. To address this issue, the image size is usually reduced prior to the processing phase, which is itself a computationally intensive task. Consequently, this paper proposes an analysis of the intrinsic parallelism and the data dependencies within the NIPALS algorithm and its subsequent implementation on a manycore architecture. Furthermore, this implementation has been validated against three hyperspectral images extracted from both remote sensing and medical datasets. As a result, an average speedup of 17x has been achieved compared to the sequential version. Finally, this approach has been compared with other state-of-the-art implementations, outperforming them in terms of performance.
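For reference, a sequential NumPy baseline of the NIPALS iteration over a flattened hyperspectral cube (pixels x bands) is sketched below; tolerances, the starting vector, and shapes are illustrative assumptions, and the paper's contribution is precisely the parallel, communication-optimized version of such a loop.

```python
# Hedged sketch: sequential NIPALS extracting components one at a time.
import numpy as np

def nipals(X, n_components, tol=1e-6, max_iter=500):
    """Return scores T and loadings P such that X ~= T @ P.T."""
    X = X - X.mean(axis=0)                    # center each spectral band
    T = np.zeros((X.shape[0], n_components))
    P = np.zeros((X.shape[1], n_components))
    for k in range(n_components):
        t = X[:, [np.argmax(X.var(axis=0))]]  # start: highest-variance band
        for _ in range(max_iter):
            p = X.T @ t / (t.T @ t)           # project onto current scores
            p /= np.linalg.norm(p)
            t_new = X @ p                     # update scores
            converged = np.linalg.norm(t_new - t) < tol
            t = t_new
            if converged:
                break
        T[:, [k]] = t
        P[:, [k]] = p
        X = X - t @ p.T                       # deflate before next component
    return T, P
```

The matrix-vector products dominate the cost, which is why distributing the image rows over clusters (as in the 16-cluster MPPA) parallelizes naturally, with communication needed mainly for the reductions.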
Non-determinism in a concurrent or distributed setting may lead to many different runs or executions of a program. This paper presents a method to reproduce a specific run for non-deterministic actor or active object systems. The method is based on recording traces of events reflecting local transitions at so-called stable states during execution, i.e., states in which local execution depends on interaction with the environment. The paper formalizes trace recording and replay for a basic active object language, showing that such local traces suffice to obtain global reproducibility of runs; during replay, different objects may operate fairly independently of each other and in parallel, yet a program under replay has a guaranteed deterministic outcome. We then show that the method extends to the other forms of non-determinism found in richer active object languages. Following the proposed method, we have implemented a tool to record and replay runs, and to visualize the communication and scheduling decisions of a recorded run, for Real-Time ABS, a formally defined, rich active object language for modeling timed, resource-aware behavior in distributed systems.
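The core idea, that purely local traces suffice for global reproducibility, can be sketched with a toy actor runtime; the event format and scheduling model below are assumptions, not the paper's formalization or the Real-Time ABS tool.

```python
# Hedged sketch: per-actor recording of message-scheduling choices, and a
# replay mode that enforces only each actor's local order. Actors can still
# run in parallel during replay, yet the outcome is deterministic.
import random

class Actor:
    def __init__(self, name):
        self.name, self.mailbox, self.trace = name, [], []

    def step_record(self):
        """Non-deterministically pick a message and log the choice at this
        stable state (a state where execution depends on the environment)."""
        if not self.mailbox:
            return None
        msg = self.mailbox.pop(random.randrange(len(self.mailbox)))
        self.trace.append(msg["id"])
        return msg

    def step_replay(self, trace, pos):
        """Consume the message the recorded trace demands next, or block
        (return None) until it has been delivered by another actor."""
        wanted = trace[pos]
        for i, msg in enumerate(self.mailbox):
            if msg["id"] == wanted:
                return self.mailbox.pop(i)
        return None
```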
In the context of high-dimensional data, state-of-the-art methods for constraint-based causal structure learning, such as the PC algorithm, are limited in their application by their worst-case exponential computat...