In the realm of industrial automation and control, the advent of Decentralized Persistent Identifiers (dPID) presents a novel paradigm for enhancing data identification and management. Traditional centralized and federated identification systems often struggle with scalability, resilience, and interoperability challenges, particularly in complex, distributed manufacturing environments. This paper explores the potential of dPID technology to address these issues by aligning with the FAIR principles (findability, accessibility, interoperability, and reusability). We introduce dPID as a transformative solution for the persistent identification, versioning, storage, and management of assets within modern industrial settings. Through dPID, assets such as machines and equipment gain unique, immutable identifiers, facilitating improved tracking, maintenance, and operational efficiency. We present scenarios demonstrating dPID's application in asset management, from real-time tracking and preventive maintenance to enhancing supply chain transparency and enabling seamless interoperability across diverse systems and platforms. By leveraging decentralized technologies, dPID offers a promising approach to achieving a more resilient, efficient, and transparent industrial ecosystem. Copyright (c) 2024 The Authors. This is an open access article under the CC BY-NC-ND license (https://***/licenses/by-nc-nd/4.0/)
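As a purely illustrative sketch of the "unique, immutable identifier" idea (the make_dpid helper and the dpid:// scheme below are hypothetical, not the paper's design): an identifier can be derived from a canonical content hash of the asset record plus its version, so any change to the asset description yields a new, verifiable identifier.

import hashlib
import json

def make_dpid(asset_record: dict, version: int) -> str:
    # Canonicalize the record so identical content always hashes identically.
    canonical = json.dumps({"record": asset_record, "version": version},
                           sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return "dpid://" + digest  # hypothetical scheme, for illustration only

# Two versions of the same machine yield distinct, stable identifiers.
machine = {"type": "CNC mill", "serial": "M-1042", "site": "plant-7"}
print(make_dpid(machine, 1))
print(make_dpid(machine, 2))

Because any party can re-derive the hash from the record itself, such identifiers can be verified without consulting a central registry, which is the property that decentralized resolution relies on.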
This paper examines how co-locating multiple VMs on a single physical server with shared storage impacts I/O performance, specifically focusing on the latency of I/O operations and the overall throughput. We introduce a ...
ISBN:
(Print) 9783031368042; 9783031368059
Parallel computing and distributed computing are popular settings for scheduling. As technology has advanced, systems have become far more compact and fast, and parallelization plays a major role in this progress. Wireless computing is likewise a common element of each new development. Task scheduling has always been a challenging area and is an NP-complete problem. Moreover, in wireless distributed computing, reliable scheduling is essential for completing a task in a wireless distributed system. This work proposes an algorithm to dynamically schedule tasks on heterogeneous processors within a wireless distributed computing system. Many heuristic, meta-heuristic, and genetic approaches have previously been applied to scheduling strategies; however, most of them do not take reliability into account before scheduling. Here, a heuristic that addresses reliable scheduling is considered. The scheduler also operates in an environment with dynamically changing resources and adapts itself to changes in system resources. Testing was carried out with up to 200 tasks scheduled in a real-time wireless distributed environment. Experiments show that the algorithm outperforms the other strategies and achieves better reliability with no increase in makespan, despite running on wireless nodes.
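The abstract does not spell out the heuristic itself; the following is a minimal sketch of the general idea of reliability-aware scheduling on heterogeneous processors, assuming hypothetical per-processor speeds, exponential failure rates ("lam"), and a slack window (none of these names or parameters come from the paper).

import math

def schedule(tasks, procs, slack=1.10):
    # Greedy sketch: for each task (largest first), shortlist processors
    # whose finish time is within `slack` of the earliest possible finish,
    # then pick the most reliable of them, where reliability of running a
    # task for exec_time seconds is exp(-lam * exec_time). This improves
    # reliability without letting the makespan grow much.
    assignment = {}
    for task_id, work in sorted(tasks, key=lambda t: -t[1]):
        finishes = [(p["ready"] + work / p["speed"], p) for p in procs]
        best_finish = min(f for f, _ in finishes)
        candidates = [p for f, p in finishes if f <= slack * best_finish]
        chosen = max(candidates,
                     key=lambda p: math.exp(-p["lam"] * work / p["speed"]))
        chosen["ready"] += work / chosen["speed"]
        assignment[task_id] = chosen["name"]
    return assignment

procs = [{"name": "p0", "speed": 2.0, "lam": 1e-3, "ready": 0.0},
         {"name": "p1", "speed": 1.9, "lam": 1e-5, "ready": 0.0}]
print(schedule([("t1", 10.0), ("t2", 4.0)], procs))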
In this dissertation, a real-time, analytic, and recursive multivariate state-estimation algorithm is developed for time-invariant, time-varying, and nonlinear dynamical systems. Unlike Gaussian-based state-estimation algorithms, the proposed state-estimation algorithm uses Cauchy random variables to model the uncertainties in the process and measurement functions. For this reason, it is referred to as the multivariate Cauchy estimator (MCE). The MCE uses a characteristic function representation of the conditional probability density function of the system state vector, given the measurement history, which generates the conditional mean and covariance estimates of the system state vector at each estimation step. The characteristic function of the MCE is enhanced in this dissertation from its previous form by an innovative, computationally tractable, and reduced structure. In particular, the backward recursive, or tree-like, evaluation procedure of the previously used characteristic function is replaced by a linear parameterization. This linear parameterization compresses the backward recursive characteristic function at each estimation step and allows similar terms of the characteristic function to be combined, which was previously not possible. Compressing the characteristic function is shown to eliminate over 99% of the terms that previously comprised it after several estimation steps, although the number of terms still grows after each measurement update. Therefore, a method is developed to run the MCE for arbitrary simulation lengths and in the multivariate setting, despite the growing size of the characteristic function. Furthermore, the estimation structure of the MCE is extended to handle nonlinearities in both the system dynamics and the measurement model, in a fashion similar to that of the extended Kalman filter. It is then shown that the MCE algorithm can achieve real-time computational performance by exploiting the parallel structure of the algorithm.
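For background (standard properties of the Cauchy distribution, not details specific to the dissertation): a scalar Cauchy random variable with median \bar{x} and scale \gamma > 0 has density and characteristic function

p(x) = \frac{1}{\pi}\,\frac{\gamma}{(x - \bar{x})^2 + \gamma^2},
\qquad
\phi(\nu) = \mathrm{E}\!\left[ e^{j \nu x} \right] = e^{j \bar{x} \nu - \gamma \lvert \nu \rvert}.

Its heavy tails give it no finite mean or variance, which is why a moment-based recursion is unavailable and the estimator instead propagates characteristic functions; the conditional density given the measurement history nevertheless admits a well-defined conditional mean and covariance.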
Multi-Node computation, also known as distributed computing, is a paradigm that allows for the efficient utilization of multiple interconnected nodes or machines to perform complex computational tasks. By dividing the...
Cloud-Edge-End computing provides favorable support for emerging applications by virtue of its low latency and high reliability. Resource allocation strategies significantly impact the quality of service and response time of cloud-edge-end systems. Due to the decentralized storage of resources, large fluctuations in task demands, and the complexity of cloud-edge-end environments, achieving effective resource allocation remains challenging. Existing resource allocation strategies rely on static models or predefined rules and cannot flexibly respond to real-time network conditions and changes in task requirements. Consequently, they are limited in achieving optimal resource utilization and effective load balancing. This paper therefore proposes an Information Entropy (IE)-driven cloud-edge-end resource allocation mechanism called HyperGAC. First, IE is employed to quantify resource fluctuations, so as to accurately capture dynamic changes in resource demands, and an Entropy-based Correlation Recurrent Unit (ECRU) is designed to predict server resource utilization. Second, a hypergraph is used to model the complex relationships between resources and tasks, breaking through the limitations of traditional graph structures in modeling multi-dimensional relationships. Finally, a Hypergraph Neural Network (HGNN) is integrated into the actor-critic framework to extract features from the hypergraph association model, while dynamically optimizing the resource allocation strategy based on the prediction results. Experimental results demonstrate that the HyperGAC mechanism significantly improves resource utilization and load-balancing performance while effectively reducing the task rejection rate.
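As a toy illustration of the entropy-driven idea (far simpler than the paper's ECRU/HGNN pipeline; the function below is an assumption for illustration): Shannon entropy over a histogram of recent utilization samples gives a scalar measure of how erratically a server's resource demand fluctuates.

import math
from collections import Counter

def utilization_entropy(samples, bins=10):
    # Discretize utilization samples (values in [0, 1]) into bins and take
    # the Shannon entropy of the histogram: low entropy indicates steady
    # demand, high entropy indicates large fluctuations.
    counts = Counter(min(int(u * bins), bins - 1) for u in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

steady = [0.52, 0.50, 0.51, 0.49, 0.50, 0.52, 0.51, 0.50]
bursty = [0.05, 0.90, 0.30, 0.75, 0.10, 0.95, 0.40, 0.60]
print(utilization_entropy(steady))  # low: stable demand
print(utilization_entropy(bursty))  # high: erratic demand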
IoT deployments with limited memory lack sustained computation power and have limited Internet connectivity due to intermittent last-mile links, particularly in rural and remote locations. To maintain congestion-free operation, most of the data collected from these networks is discarded instead of being transmitted remotely for further processing. In this article, we propose the timed Loop Storage paradigm to distribute data and use the underutilized bandwidth of local network links for sequentially queuing packets of computational data that are being operated on in parts on one of the IoT nodes. While the sequenced packets execute in order on the target IoT device, the remaining packets, which are not currently being operated on, distribute and keep looping over the network links until they are required for processing. A time-synchronized packet deflection mechanism on each node handles data transfer and the looping of individual packets. In our implementation, although the proposed approach requires data rates of 6 Mbps, it uses only 45 Kb of primary storage even for sizeable data, ensuring scalability of the connected IoT devices' temporary storage capabilities and making it useful for real-life applications.
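A toy sketch of the looping idea (not the paper's exact time-synchronized mechanism; the packet numbering and two-slot buffer are assumptions for illustration): packets that do not fit in the node's small primary-storage buffer circulate on a ring of link queues and are admitted only when they are next in sequence, otherwise they are deflected for another loop.

from collections import deque

def loop_storage_demo(buffered, looping):
    # The node keeps a tiny in-order buffer; every other packet circulates
    # on the link ring and is admitted only when it is the next one needed,
    # otherwise it is deflected to keep looping over the network links.
    local, ring = deque(buffered), deque(looping)
    processed, deflections = [], 0
    while local:
        nxt = local.popleft()
        processed.append(nxt)            # execute packets strictly in order
        want = nxt + len(buffered)       # sequence number due next locally
        for _ in range(len(ring)):
            pkt = ring.popleft()
            if pkt == want:
                local.append(pkt)        # admit into the freed buffer slot
                break
            ring.append(pkt)             # deflect: loop around once more
            deflections += 1
    return processed, deflections

# Packets 0-1 start in the two-slot buffer; 2-5 loop on the links out of order.
print(loop_storage_demo([0, 1], [3, 2, 5, 4]))  # ([0, 1, 2, 3, 4, 5], 4)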
ISBN:
(Print) 9798400702013
The proceedings contain 7 papers. The topics discussed include: the commercial side of graph analytics: big uses, big mistakes, big opportunities; graph feature management: impact, challenges and opportunities; better distributed graph query planning with scouting queries; EAGER: explainable question answering using knowledge graphs; going with the flow: real-time max-flow on asynchronous dynamic graphs; future-time temporal path queries; fast synthetic data-aware log generation for temporal declarative models; learning graph neural networks using exact compression; and a demonstration of interpretability methods for graph neural networks.
ISBN:
(Print) 9798400701221
The proceedings contain 26 papers. The topics discussed include: a selective and biased choice of techniques for building a distributed data store; accelerating the performance of distributed stream processing systems with in-network computing; secure distributed data and event processing at scale: where are we now?; adaptive distributed streaming similarity joins; I will survive: an event-driven conformance checking approach over process streams; on improving streaming system autoscaler behavior using windowing and weighting methods; practical forecasting of cryptocoins timeseries using correlation patterns; an exploratory analysis of methods for real-time data deduplication in streaming processes; considerations for integrating virtual threads in a Java framework: a Quarkus example in a resource-constrained environment; and discovery of breakout patterns in financial tick data via parallel stream processing with in-order guarantees.
IoT devices have led to the development of distributed measurement systems (DMS). However, cyber-attacks have increased, making it crucial to implement security protocols without reducing network throughput. Open firm...