Background: Accurate modeling of the natural gas desulfurization process enables enterprises to maintain stable production, optimize efficiency, improve product gas quality, and ensure compliance with environmental regulations. Given the limited availability of industrial data, machine learning models, mechanism models, and hybrid models integrating both may become inefficient or inaccurate. Methods: To bridge this gap, a transfer learning-based modeling method for the natural gas desulfurization process was proposed. First, a deep neural network model was developed to predict the hydrogen sulfide content in the product gas, trained on mechanism-based calculations. Subsequently, a small dataset from the target scenario was used to fine-tune the model parameters for accurate predictions under actual production conditions. Significant Findings: The results demonstrate that the established model provides more stable and accurate predictions than traditional machine learning models, achieving over a 20% reduction in prediction error while also improving modeling efficiency. Finally, interpretability analysis of the proposed model reveals that its prediction capability in actual production scenarios was rationally and effectively improved at low computational cost through transfer learning. This work offers a novel paradigm for developing modeling methods tailored to the practical production processes of natural gas desulfurization.
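A minimal sketch of the pretrain-then-fine-tune workflow this abstract describes, assuming a small fully connected network, placeholder data in place of the mechanism-based and plant datasets, and an illustrative freeze-the-backbone strategy; none of these details come from the paper itself.

```python
# Hypothetical sketch of pretraining on mechanism-based data and fine-tuning
# on a small target-scenario dataset. Feature count, network size, and data
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class H2SPredictor(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)  # predicted H2S content in product gas

    def forward(self, x):
        return self.head(self.backbone(x))

def train(model, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pretrain on abundant mechanism-based (simulated) data.
x_sim, y_sim = torch.randn(5000, 8), torch.randn(5000, 1)    # placeholder data
model = H2SPredictor(n_features=8)
train(model, x_sim, y_sim)

# 2) Fine-tune on the small target-scenario dataset: freeze the backbone and
#    update only the output head at a lower learning rate.
x_plant, y_plant = torch.randn(100, 8), torch.randn(100, 1)  # placeholder data
for p in model.backbone.parameters():
    p.requires_grad = False
train(model, x_plant, y_plant, epochs=100, lr=1e-4)
```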
The growing adoption of liquefied natural gas-fueled vessels (LNGFVs) in maritime traffic systems has heightened the need for advanced resilience modeling to mitigate operational risks and improve safety. This research introduces a novel hybrid methodology combining Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN) to dynamically assess the resilience of LNGFV operations. Initially, a safety-control feedback structure is developed using Causal Analysis based on System Theory (CAST), revealing critical factors and their interrelationships that influence navigation safety and resilience. Subsequently, an HMM-based quantification model is designed to address latent-node measurement challenges within the DBN framework, enabling precise inference of complex interactions in maritime traffic systems. Real-world data from an LNGFV collision case in Northeast Australia, including ship-sensor and environmental data, are utilized to reconstruct the accident process and analyze the functional interactions between LNG and vessel operations. Simulation results demonstrate that LNGFV traffic resilience evolves through dynamic interactions among the external environment, the vessel, and LNG, exhibiting a fluctuating temporal pattern. Additionally, the proposed Triple Protection Mechanism shows significant potential in enhancing system resilience. This study provides a comprehensive modeling framework and a new perspective for improving the safety and resilience of maritime transportation, particularly for LNGFV operations. The hybrid HMM-DBN approach offers a robust tool for researchers and practitioners to address the challenges of complex maritime systems.
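As an illustration of the HMM-based quantification idea (not the paper's implementation), the sketch below fits a Gaussian HMM to placeholder observable indicators and exposes posterior state probabilities that could serve as soft evidence for a latent DBN node; the library (`hmmlearn`), indicator choice, and number of hidden states are assumptions.

```python
# Illustrative sketch: fit a Gaussian HMM to observable indicators and use its
# posterior state probabilities as soft evidence for a latent resilience node
# in a DBN. Requires the `hmmlearn` package.
import numpy as np
from hmmlearn import hmm

# Placeholder observations: rows = time steps, columns = ship-sensor /
# environmental indicators (e.g., speed deviation, wind, traffic density).
obs = np.random.rand(500, 3)

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(obs)

# Posterior probability of each hidden state (e.g., degraded / normal /
# recovering) at every time step; these can parameterize the latent DBN node.
state_posteriors = model.predict_proba(obs)
print(state_posteriors[:5])
```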
Heart disease is a leading global cause of death, making early prediction crucial for saving lives. Although the exact causes of heart disease are not fully known, its strong links to high mortality, severe morbidity, and disability highlight the need for advanced predictive technologies. Artificial Intelligence (AI) plays a crucial role in developing these technologies, with machine learning (ML) being a key tool for analyzing data from AI devices. However, creating effective ML-based approaches for predicting heart disease is highly challenging. This study aims to conduct an empirical analysis of twelve promising machine learning approaches, together with their detailed mathematical analysis, for predicting heart disease with optimal accuracy for diagnostic purposes. It details the application development stages, including dataset collection and dataset attribute evaluation. In data analysis, metrics such as accuracy, ROC curves, confusion matrices, and feature importance are presented graphically, while summary plots of Shapley values highlight the significance of features in the applied models, which are used to evaluate the approximate accuracy in detecting heart disease. In conclusion, among the twelve machine learning models evaluated, support vector machines (SVM) achieved the highest performance, with an accuracy of 89%, a receiver operating characteristic (ROC) score of 92%, and a precision-recall curve (PRC) score of 93%.
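A minimal sketch of the kind of SVM evaluation pipeline reported above, using synthetic stand-in data with a UCI-style 13-feature layout as an assumption; the actual dataset, preprocessing, and hyperparameters are not taken from the paper.

```python
# Sketch of an SVM classifier evaluated with accuracy, ROC AUC, and PRC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score

# Placeholder data standing in for a heart-disease dataset (13 attributes).
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("ROC AUC :", roc_auc_score(y_te, proba))
print("PRC AUC :", average_precision_score(y_te, proba))
```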
ISBN: (Print) 9783031790584; 9783031790591
The management of data is crucial in today's organizations, making it necessary to specify exactly how data is created, accessed, and manipulated during business process enactment. Given the importance of data, it comes as a surprise that approaches like BPMN only provide limited support for modeling data and how it is read and written. In particular, they cannot represent multiple data objects of the same type, and they lack concise semantics for multi-instance data objects. Against this background, this paper proposes an extension to BPMN process models by introducing variable identifiers to distinguish individual data objects of the same class in a given process. The behavior is detailed using translational semantics to Colored Petri nets, and a set of verification mechanisms is presented that allows for a more precise analysis of data objects in business processes.
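A hypothetical illustration of the variable-identifier idea, unrelated to the paper's actual formalization: two data objects of the same class stay distinct within one process instance because they are keyed by (class, identifier) rather than by class alone.

```python
# Hypothetical illustration: several data objects of the same class ("Order")
# coexist in one case and are addressed via variable identifiers.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    cls: str          # data object class, e.g. "Order"
    identifier: str   # variable identifier distinguishing instances
    state: str = "created"
    attributes: dict = field(default_factory=dict)

store = {}

def write(obj: DataObject):
    # Objects are keyed by (class, identifier), so multiple instances of the
    # same class are not collapsed into a single object.
    store[(obj.cls, obj.identifier)] = obj

def read(cls: str, identifier: str) -> DataObject:
    return store[(cls, identifier)]

write(DataObject("Order", "o1", state="received"))
write(DataObject("Order", "o2", state="received"))
read("Order", "o1").state = "approved"
print(read("Order", "o1").state, read("Order", "o2").state)  # approved received
```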
It is of great significance to realize the accurate prediction of the key output response of the chemical synthetic ammonia process for optimizing system performance and operation monitoring. Because many key intermediate variables of complex systems are difficult to measure comprehensively, mechanism analysis and identification modeling techniques face great difficulties and errors. Based on random forest (RF) variable selection, a deep neural network combining a temporal convolutional network (TCN) and a Transformer is proposed to predict the output variables of the synthetic ammonia process. The RF technique is used to select the principal input variables to increase the computational efficiency and the generalization ability of the network. A self-attention mechanism is used to assign biased weights to the data of the key feature variables. A TCN-Transformer network with encoding and decoding techniques is first designed to enhance the correlation of information between variable data, which can extract features of the input variables and achieve dynamic modeling of multivariate feature sequences. The network is optimized using a multi-head attention mechanism, and the key features are enhanced by probabilistic weight assignment to improve the prediction accuracy. Finally, comparison with existing methods verifies the merit and applicability of the proposed network (R² = 0.8233, RMSE = 0.0032, MAE = 0.0024) for predicting the key output of carbon monoxide using offline generated data.
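A small sketch of the RF-based variable selection step described above (the TCN-Transformer itself is omitted); the synthetic data, variable counts, and the "keep the top 8" threshold are illustrative assumptions.

```python
# Sketch of random-forest importance ranking used to pick principal input
# variables before training a downstream sequence model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                                        # candidate process variables
y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)  # e.g. CO content

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = rf.feature_importances_

# Keep only the principal input variables for the downstream TCN-Transformer.
selected = np.argsort(importances)[::-1][:8]
X_selected = X[:, selected]
print("selected variable indices:", selected)
```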
Small-batch workpieces in smart manufacturing demand process parameter modeling, but existing models lack analysis across varying sample sizes and runtime conditions. This study proposes a novel surface-roughness prediction method, Response Surface Methodology-BP Neural Network (RSM-BPNN), designed for experimental data from single small-batch workpieces with varying sample sizes. First, polynomial feature transformation and selection are performed based on the proposed process parameters to improve the feature quality of input data. Second, a Dynamic Central Composite Design-Response Surface Methodology (DCCD-RSM) determines the optimal experimental region and fits surface roughness, while a BPNN trains a deep learning model for prediction. The BPNN fusion method combines both approaches to create a general, adaptive predictive model for surface roughness. Finally, the accuracy and practicality of the BPNN model were verified through reverse calculation and parameter optimization in actual robot grinding experiments. The model demonstrated good predictive performance for surface roughness in aluminum alloy grinding, providing reliable guidance for surface quality prediction and process parameter optimization in small-batch workpieces within the context of smart manufacturing.
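A rough sketch of two ingredients named in this abstract, polynomial feature transformation of the process parameters and a BP-style neural network regressor, under the assumption of synthetic grinding data and an sklearn MLP standing in for the BPNN; the RSM fitting and DCCD design steps are not reproduced.

```python
# Sketch: second-order polynomial features of process parameters feed a small
# neural network that predicts surface roughness Ra. Data are synthetic.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder process parameters: spindle speed, feed rate, grinding force.
X = rng.uniform(size=(60, 3))
Ra = 0.4 + 0.3 * X[:, 0] - 0.2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.02, size=60)

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # RSM-style quadratic terms
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, Ra)
print("predicted Ra for first 3 runs:", model.predict(X[:3]))
```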
This paper is concerned with an event-triggered distributed load frequency control (LFC) method for multi-area interconnected power systems. Firstly, because of the high dimensionality, nonlinearity and uncertainty of the power system, the relevant model information cannot be fully obtained. To realize the design of the LFC algorithm under the condition that the model information is unknown, the equivalent functional relationship between the control signal and the area-control-error signal is established using a dynamic linearization technique. Secondly, a novel distributed load frequency control algorithm is proposed based on the controller dynamic-linearization method, and the controller parameters are tuned online by constructing a radial basis function neural network. In addition, to reduce the computation and communication burden on the system, an event-triggered mechanism is also designed, in which whether the data is transmitted at the current instant is completely determined by a triggering condition. Rigorous analysis shows that the proposed method can ensure that the frequency deviation of the power system converges to a bounded value. Finally, simulation results in a four-area power system verify the effectiveness of the proposed algorithm.
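A toy sketch of the event-triggered transmission idea, assuming a simple absolute-deviation triggering condition on the area control error; the paper's actual triggering condition and threshold design are not reproduced here.

```python
# Toy sketch: the area control error (ACE) is transmitted to the controller
# only when it deviates from the last transmitted value by more than a
# threshold, reducing communication between samples.
import numpy as np

def event_triggered_updates(ace: np.ndarray, threshold: float = 0.05):
    last_sent = ace[0]
    triggers = [0]                                    # first sample is always sent
    for k in range(1, len(ace)):
        if abs(ace[k] - last_sent) > threshold:       # triggering condition
            last_sent = ace[k]
            triggers.append(k)
        # otherwise the controller keeps using the previously transmitted value
    return triggers

ace = 0.1 * np.sin(0.05 * np.arange(500)) + 0.01 * np.random.randn(500)
events = event_triggered_updates(ace)
print(f"transmitted {len(events)} of {len(ace)} samples")
```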
Process mining merges data science and process science, allowing for the analysis of recorded process data by capturing activities within event logs. It finds more and more applications in the optimization of the production and administrative processes of private companies and public administrations. This field consists of several areas: process discovery, compliance monitoring, process improvement, and predictive process monitoring. Within predictive process monitoring, the subarea of next-activity prediction aims to predict the next activity performed using control-flow data, i.e., event data with no attributes other than the timestamp, activity label, and case identifier. A popular approach in this subarea is to use sub-sequences of events, called prefixes and extracted with a sliding window, to predict the next activity. In the literature, several features are added to increase performance. Specifically, this article addresses the problem of predicting the next activity in predictive process monitoring, focusing on the usefulness of temporal features. While past research has explored a variety of features to improve prediction accuracy, the contribution of temporal information remains unclear. This article proposes a comparative analysis of temporal features, such as differences in timestamps, time of day, and day of week, extracted for each event in a prefix. Using both k-fold cross-validation for robust benchmarking and a 75/25 split to simulate real scenarios in which new process events are predicted based on past data, it is shown that timestamp differences within the same prefix consistently outperform other temporal features. Our results are further validated by Shapley value analysis, highlighting the importance of timestamp differences in improving the accuracy of next-activity prediction.
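A short sketch of extracting the three temporal features compared above for one prefix: timestamp differences between consecutive events, time of day, and day of week. The column names and the tiny example log are assumptions, not a specific event-log schema from the article.

```python
# Sketch: per-event temporal features for a single prefix of a case.
import pandas as pd

prefix = pd.DataFrame({
    "case_id": ["c1"] * 4,
    "activity": ["register", "check", "approve", "notify"],
    "timestamp": pd.to_datetime([
        "2024-01-08 09:00", "2024-01-08 09:40",
        "2024-01-08 13:05", "2024-01-09 08:30",
    ]),
})

prefix = prefix.sort_values("timestamp")
# Difference (in seconds) to the previous event within the same prefix.
prefix["delta_prev_s"] = prefix["timestamp"].diff().dt.total_seconds().fillna(0)
# Time of day (seconds since midnight) and day of week (Monday = 0).
prefix["time_of_day_s"] = (prefix["timestamp"] - prefix["timestamp"].dt.normalize()).dt.total_seconds()
prefix["day_of_week"] = prefix["timestamp"].dt.dayofweek
print(prefix[["activity", "delta_prev_s", "time_of_day_s", "day_of_week"]])
```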
In the domain of rotating machinery, bearings are vulnerable to different mechanical faults, including ball, inner, and outer race faults. Various techniques can be used in condition-based monitoring, from classical signal analysis to deep learning methods, in diagnosing these faults. Given the complex working conditions of rotary machines, multivariate statistical process control charts such as Hotelling's T² and the Squared Prediction Error are useful for providing early warnings. However, these methods are rarely applied to condition monitoring of rotating machinery due to the univariate nature of the datasets. In the present paper, we propose a multivariate statistical process control-based fault detection method that utilizes multivariate data composed of Fourier transform features extracted for fixed-time batches. Our approach makes use of the multidimensional nature of Fourier transform characteristics, which record more detailed information about the machine's status, in an effort to enhance early defect detection and diagnosis. Experiments with varying vibration measurement locations (Fan End, Drive End), fault types (ball, inner, and outer race faults), and motor loads (0-3 horsepower) are used to validate the suggested approach. The outcomes illustrate our method's effectiveness in fault detection and point to possible wider uses in industrial maintenance.
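A minimal sketch of the monitoring statistic described above: Hotelling's T² computed on per-batch FFT magnitude features of a vibration signal. The batch length, number of frequency bins, synthetic signal, and absence of a formal control limit are all illustrative assumptions.

```python
# Sketch: T^2 = (x - mu)^T S^{-1} (x - mu) on FFT-based features per batch.
import numpy as np

def fft_features(batch: np.ndarray, n_bins: int = 8) -> np.ndarray:
    # Average FFT magnitude over coarse frequency bins for one fixed-time batch.
    spectrum = np.abs(np.fft.rfft(batch))
    return np.array([seg.mean() for seg in np.array_split(spectrum, n_bins)])

rng = np.random.default_rng(0)
# Reference (healthy) batches used to estimate the in-control mean and covariance.
healthy = np.vstack([fft_features(rng.normal(size=2048)) for _ in range(200)])
mu = healthy.mean(axis=0)
S_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def hotelling_t2(x: np.ndarray) -> float:
    d = x - mu
    return float(d @ S_inv @ d)

# New batch to monitor; a bearing fault would inflate T^2 above a control limit.
t2 = hotelling_t2(fft_features(rng.normal(size=2048)))
print("T^2 statistic:", t2)
```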
Traditional approaches typically flatten the process when checking the conformance of complex processes. However, this flattening approach can result in the loss of dependencies between objects, reducing the accuracy ...