ISBN:
(Print) 9783031538292; 9783031538308
In many industrial processes, the control systems are the most critical components. Evaluating the performance and robustness of control loops is an important task for maintaining the health of a control system and the efficiency of the process. In the area of Control-Loop Performance Monitoring (CPM), there are two groups of indices for evaluating the performance of control loops: stochastic and deterministic. Using stochastic indices, a control engineer can calculate the performance indices of a control loop from normal-operation data and with minimal knowledge of the process; the drawback is that the resulting performance analysis is difficult, because advanced knowledge is needed to interpret the indices. In contrast, the interpretation and analysis of deterministic indices is simpler; however, the problem with this approach is that invasive monitoring of the plant is required to calculate the indices. In this paper, it is proposed to use an Artificial Neural Network to estimate the deterministic indices, with the stochastic indices and some process information as inputs, taking advantage of the fact that data collection for the stochastic indices is simpler.
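As an illustration of the proposed mapping, the sketch below trains a small feed-forward network that regresses deterministic-style indices on stochastic-style indices plus basic process information. The specific features, targets, and the synthetic data generator are assumptions made for demonstration only; they are not the indices or data used in the paper.

```python
# Minimal sketch: a feed-forward network mapping stochastic CPM indices
# (plus basic process information) to deterministic indices. Feature and
# target choices below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000

# Assumed inputs: a Harris-type index, output autocorrelation at lag 1,
# noise-to-signal ratio, and a normalized process time constant.
X = rng.uniform(0.0, 1.0, size=(n, 4))

# Assumed targets: "settling time" and "overshoot", generated by an
# arbitrary smooth mapping purely for demonstration.
y = np.column_stack([
    2.0 + 3.0 * X[:, 0] * X[:, 3] + 0.5 * X[:, 2],
    0.1 + 0.6 * (1.0 - X[:, 0]) + 0.2 * X[:, 1] ** 2,
]) + 0.02 * rng.standard_normal((n, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```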
ISBN:
(Print) 9798350308235
Data analytics is pivotal in assessing the technical characteristics and performance of Battery Energy Storage Systems (BESS), underpinning BESS modeling, optimization, and control. PNNL has collected diverse and comprehensive real-world BESS operational datasets in collaboration with the Electric Power Research Institute and multiple Washington State utilities, allowing for BESS analytics and modeling. However, raw datasets frequently harbor anomalies from measurement errors and equipment malfunctions, impacting BESS reliability and analysis accuracy. To address this challenge, this paper presents a methodology for the rapid detection of anomalous charge or discharge cycles within BESS operational data, expediting the cleaning process while ensuring data integrity. Using case studies from real BESS operational datasets, we demonstrate that the proposed method detects anomalies and aids in their resolution, improving system performance characterization. It also reveals recurring sources of data anomalies, offering insights for data cleaning. Practitioners can gain valuable insights from the identified anomalous cycles in the real-world datasets, along with the investigative process for root-cause analyses and the essential data-cleaning steps.
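The abstract does not spell out the detection mechanics, so the sketch below shows one generic way to screen charge/discharge cycles: summarize each cycle into a few features and flag outliers with a robust z-score rule. The column names and threshold are assumptions for illustration and do not reproduce the paper's method.

```python
# Illustrative sketch (not the paper's exact method): flag anomalous
# charge/discharge cycles from cycle-level summary features using a
# robust z-score (median / MAD) rule.
import numpy as np
import pandas as pd

def cycle_features(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each cycle; expects columns 'cycle_id', 'power_kw',
    'timestamp' (these column names are assumptions)."""
    def summarize(g):
        dt_h = g["timestamp"].diff().dt.total_seconds().fillna(0) / 3600.0
        return pd.Series({
            "energy_kwh": float((g["power_kw"].abs() * dt_h).sum()),
            "duration_h": float(dt_h.sum()),
        })
    return df.groupby("cycle_id").apply(summarize)

def flag_anomalies(feats: pd.DataFrame, threshold: float = 3.5) -> pd.Series:
    """Return True for cycles whose robust z-score exceeds the threshold
    in any feature."""
    med = feats.median()
    mad = (feats - med).abs().median() + 1e-9
    robust_z = 0.6745 * (feats - med) / mad
    return (robust_z.abs() > threshold).any(axis=1)
```

In practice, `cycle_features` would be applied to the segmented operational data and `flag_anomalies` would return the candidate cycles to inspect during cleaning.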
ISBN:
(Print) 9798350373981; 9798350373974
The linearization of the full model of electrical power systems is of great significance for the adoption of linear analysis techniques to examine the system's dynamic characteristics, as well as for the design and tuning of practical controllers. Typically, the state-space model of the power system is first obtained from the time-domain model. Linear analysis and controller tuning are then performed using the linear state-space model. This approach, however, often has several practical limitations, such as the unavailability of a time-domain model when only simulation or measurement data is available, or the lack of linearization capability in the software tool in which the time-domain model is implemented. Moreover, the linearization of the time-domain models of large-scale power systems results in very high-dimensional state-space models, which greatly complicates further analysis. To this end, in this paper, suitable linear data-driven models of reduced order are identified for power systems so as to retain the most relevant modes of oscillation of the original system. A rigorous commercial software tool is used for data generation and a well-established Python toolbox is used for model identification: different models and techniques are applied and then compared in terms of accuracy and simplicity.
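The abstract does not name the toolbox or the identification method, so the sketch below uses a plain NumPy implementation of the Eigensystem Realization Algorithm (ERA) as one representative technique for obtaining a reduced-order linear state-space model from response data; it stands in for, and should not be read as, the authors' chosen method.

```python
# Minimal NumPy sketch of the Eigensystem Realization Algorithm (ERA):
# build a Hankel matrix from impulse-response samples (Markov parameters)
# and keep only the dominant singular values to obtain a reduced-order
# discrete-time state-space model (A, B, C).
import numpy as np

def era(markov, order, rows=20, cols=20):
    """markov: impulse-response samples h[1], h[2], ... (SISO).
    Returns discrete-time (A, B, C) of the requested reduced order."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U_r, V_r = U[:, :order], Vt[:order, :]
    S_half = np.diag(np.sqrt(s[:order]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = S_half_inv @ U_r.T @ H1 @ V_r.T @ S_half_inv
    B = (S_half @ V_r)[:, :1]
    C = (U_r @ S_half)[:1, :]
    return A, B, C

# Example: recover a lightly damped 2nd-order mode from simulated impulse data.
a_true = np.array([[1.6, -0.8], [1.0, 0.0]])   # complex poles, |lambda| ~ 0.89
b_true = np.array([[1.0], [0.0]])
c_true = np.array([[1.0, -0.5]])
x, h = b_true.copy(), []
for _ in range(60):
    h.append(float(c_true @ x))
    x = a_true @ x
A, B, C = era(np.array(h), order=2)
print("Identified eigenvalues:", np.linalg.eig(A)[0])
print("True eigenvalues:      ", np.linalg.eig(a_true)[0])
```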
Wastewater treatment plants (WWTPs) are complex systems presenting stochastic, non-linear, and non-stationary behavior, which makes their operational management very challenging. In this context, data collected from distributed sources across the plant play a central role in the optimized operation and control of WWTPs. However, even when available, the use of the collected data is far from trivial due to the coexistence of asynchronous measurements, data with different granularity, measurements of different quality (precision, accuracy), and multimodal sources (sensors, spectra, images, hyphenated instrumentation), among other aspects related to the data life cycle. Such heterogeneity in process-data characteristics hinders the application of most off-the-shelf data analytics methods. Flexible solutions able to cope with the complexity of these systems and of the data they generate are therefore necessary to overcome these limitations and enable effective analysis and operation of WWTPs. In this article, data-fusion approaches for handling multiple heterogeneous sources of process data are developed and comparatively tested. Priority is given to solutions that can flexibly be adapted to different specific operational contexts. The methodologies are tested on an industrial case study (WWTP), where the concentration of a toxin in the effluent stream is to be predicted from the available heterogeneous data. Single- and multi-source modeling approaches are considered, and a nested cross-validation method was developed to handle the time-series nature of the models. Bayesian fusion synergistically combines data from different sources considering their uncertainty, standing out among the methodologies tested as offering a good balance in terms of accuracy (RMSEP = 1.34), stability (prequential IQR = 0.034), and flexibility (to accommodate missing and new sources).
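To make the uncertainty-weighting idea concrete, the sketch below shows the simplest Gaussian building block of such a fusion: independent source estimates of one quantity combined with inverse-variance weights. It is an illustrative special case, not the article's full multi-source methodology, and the numbers are made up.

```python
# Illustrative Gaussian special case of uncertainty-weighted fusion:
# independent source estimates of the same quantity are combined with
# inverse-variance weights.
import numpy as np

def fuse_gaussian(means, variances):
    """Fuse independent Gaussian estimates N(mean_i, var_i) of one quantity.
    Returns the fused mean and variance."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Example: three sources predict an effluent toxin concentration with
# different uncertainties (values fabricated for illustration).
mean, var = fuse_gaussian([4.1, 3.6, 4.4], [0.20, 0.90, 0.35])
print(f"fused estimate: {mean:.2f} +/- {var**0.5:.2f}")
```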
During the in-process monitoring of a few synthetic intermediates containing an amide group using reversed-phase liquid chromatography, an unknown impurity was observed. The mass data suggested it to be the nitrile analogue of the corresponding intermediate, which could not be explained by the established reaction pathway(s). A study was undertaken to track the source of this impurity and enable the development of an appropriate control strategy. Subsequent investigations revealed that this impurity was generated in the sample from the amide intermediate in the presence of residual amounts of a heavy-metal catalyst carried forward from the previous step. Additionally, under similar conditions, alternative catalysts with different heavy metals such as rhodium(II), silver(II), nickel(II), platinum(II), zinc(II), copper(II), and ruthenium(II) did not produce this impurity. Analysis of structurally different amide compounds under similar conditions revealed that amides with nitrogen at the alpha or beta position in the structure were less prone to generate this impurity. Also, the nature of the aromatic substituents and the composition of the diluent were found to exert a substantial influence on the level of formation of this impurity. A strategy was developed to mitigate or reduce the generation of this impurity during analytical sample preparation by implementing measures such as adding a chelating agent, ethylenediaminetetraacetic acid (EDTA), to the sample or substituting acetonitrile with methanol as the diluent. To gain further insights into the factors contributing to the formation of this impurity, a full-factorial experimental design was performed, examining the effects of temperature, analyte concentration, and Pd content. The outcomes of the modeling experiments indicated that adjusting the sample dilution could serve as an additional control strategy to eliminate the in situ generation of this impurity. A comprehensive exploration of the specific details perta
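For readers unfamiliar with the design mentioned above, the sketch below generates a generic two-level full factorial over the three factors named in the abstract (temperature, analyte concentration, Pd content) and fits main effects plus two-factor interactions by least squares. The coded levels and the simulated response are placeholders, not the study's data.

```python
# Generic two-level full-factorial sketch for three factors.
# Factor levels and the simulated response are synthetic placeholders.
import itertools
import numpy as np

levels = {
    "temperature": (-1, +1),     # coded low/high levels
    "concentration": (-1, +1),
    "pd_content": (-1, +1),
}
runs = np.array(list(itertools.product(*levels.values())), dtype=float)  # 8 runs

# Simulated impurity response: main effect of Pd plus a temperature x Pd
# interaction, with a little noise (purely synthetic).
rng = np.random.default_rng(1)
y = 1.0 + 0.8 * runs[:, 2] + 0.4 * runs[:, 0] * runs[:, 2] + 0.05 * rng.standard_normal(8)

# Design matrix: intercept, main effects, and two-factor interactions.
X = np.column_stack([
    np.ones(8),
    runs,                           # T, C, Pd
    runs[:, 0] * runs[:, 1],        # T x C
    runs[:, 0] * runs[:, 2],        # T x Pd
    runs[:, 1] * runs[:, 2],        # C x Pd
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["intercept", "T", "C", "Pd", "TxC", "TxPd", "CxPd"], coef):
    print(f"{name:>9s}: {c:+.3f}")
```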
ISBN:
(Print) 9783031770425; 9783031770432
This paper investigates the Proton Exchange Membrane (PEM) fuel cell as a potential alternative energy source for applications including transportation and emergency power systems. It introduces a novel circuit model for a PEM fuel cell that can be used to design and analyze fuel-cell power systems. An optimization algorithm is used to identify key parameters, not typically found in manufacturers' datasheets, for developing a precise and accurate model to predict fuel-cell performance. A new algorithm based on differential evolution (DE) is utilized to compute five previously unknown parameters of a PEM fuel cell (PEMFC). In the optimization process, these parameters are treated as decision variables, and the objective is to minimize the sum of squared errors (SSE) between the estimated and the actual measured cell voltage. The SSE achieved by the DE algorithm was found to be 0.313, demonstrating its effectiveness in accurately predicting fuel-cell performance. This precision makes DE particularly suitable for the development of digital twins for fuel-cell applications and control systems in the automotive industry. The study underscores the potential of metaheuristic algorithms like DE in predicting fuel-cell performance, aiding in the development and commercialization of digital twins within the automotive sector.
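The general workflow, treating the unknown parameters as decision variables and minimizing the SSE against measured voltages with differential evolution, can be sketched with SciPy's differential_evolution as below. The five-parameter empirical polarization model and the synthetic "measurements" are assumptions made for illustration; the paper's circuit model is not reproduced here.

```python
# Hedged sketch: fit five parameters of a simple empirical polarization
# curve V(i) = E0 - b*log10(i) - R*i - m*exp(n*i) by minimizing the SSE
# against measured voltages using differential evolution. The model form
# and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

def cell_voltage(i, E0, b, R, m, n):
    return E0 - b * np.log10(i) - R * i - m * np.exp(n * i)

# Synthetic "measured" polarization data (current density, voltage).
i_meas = np.linspace(0.05, 1.2, 25)
true = (0.95, 0.05, 0.12, 2e-4, 4.0)
v_meas = cell_voltage(i_meas, *true) + 0.002 * np.random.default_rng(0).standard_normal(i_meas.size)

def sse(params):
    return float(np.sum((cell_voltage(i_meas, *params) - v_meas) ** 2))

bounds = [(0.8, 1.2),    # E0, open-circuit-like voltage
          (0.01, 0.2),   # b, Tafel-type slope
          (0.01, 0.5),   # R, ohmic resistance
          (1e-5, 1e-2),  # m, mass-transport coefficient
          (1.0, 8.0)]    # n, mass-transport exponent
result = differential_evolution(sse, bounds, seed=0, tol=1e-10)
print("fitted parameters:", result.x)
print("SSE:", result.fun)
```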
Our world is constantly generating vast amounts of data every day. This article focuses on the data generated by social media platforms. By analyzing data from Sina Weibo, we have discovered three different decay proc...
ISBN:
(Print) 9783031416194; 9783031416200
Process models are used to represent processes in order to support communication and allow for the simulation and analysis of the processes. Many real-life processes naturally define partial orders over the activities they are composed of. Partial orders can be used as a graph-like representation of process behavior. On the one hand, partially ordered graph representations allow us to easily model concurrent and sequential behavior between activities while ensuring simplicity and scalability. On the other hand, partial orders lack support for typical process constructs such as choice and loop structures. Therefore, in this paper, we present a novel process modeling notation, i.e., the Partially Ordered Workflow Language (POWL). A POWL model is a partially ordered graph extended with control-flow operators for modeling choice and loop structures. A POWL model has a hierarchical structure; i.e., POWL models can be combined into a new model either using a control-flow operator or as a partial order. We propose an initial approach to demonstrate the feasibility of using POWL models for process discovery, and we evaluate our approach based on real-life data.
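To make the hierarchical structure tangible, the sketch below defines an illustrative POWL-like data structure in Python: a node is an activity, a choice/loop operator over child models, or a partial order over child models with explicit ordering edges. It mirrors the concepts described above but is not the authors' implementation or any published API, and the loop semantics (do/redo children) are an assumption.

```python
# Illustrative POWL-like data structure: activities, choice/loop operators,
# and partial orders over child models, nested hierarchically.
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Activity:
    label: str

@dataclass
class Operator:
    kind: str                  # "choice" or "loop"
    children: List["Node"]     # for "loop": assumed do-part and redo-part

@dataclass
class PartialOrder:
    children: List["Node"]
    edges: List[Tuple[int, int]] = field(default_factory=list)  # (i, j): children[i] before children[j]

Node = Union[Activity, Operator, PartialOrder]

# Example: a and b are unordered (may be concurrent); both precede a choice
# between c and a loop that does d and optionally redoes e.
model = PartialOrder(
    children=[
        Activity("a"),
        Activity("b"),
        Operator("choice", [
            Activity("c"),
            Operator("loop", [Activity("d"), Activity("e")]),
        ]),
    ],
    edges=[(0, 2), (1, 2)],
)
print(model)
```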
Authors:
Guerfel, Mohamed; Messaoud, Hassani
University of Sousse, Higher Institute of Applied Sciences and Technology of Sousse, Electronics Department, Sousse, Tunisia
National Engineering School of Monastir, Monastir, Tunisia
This paper proposes an innovative Dynamic Principal Component Analysis (DPCA) scheme to perform fault detection and identification (FDI) for systems affected by process faults. In this scheme, a new modeling method is...
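The abstract is truncated here, so only the generic DPCA recipe can be illustrated: augment each sample with time-lagged copies of the measurements, fit PCA on normal-operation data, and monitor the squared prediction error (SPE) of new samples against an empirical limit. The sketch below follows that standard recipe with synthetic data and should not be confused with the new modeling method proposed in the paper.

```python
# Generic Dynamic PCA (DPCA) sketch for fault detection with synthetic data.
import numpy as np
from sklearn.decomposition import PCA

def lagged_matrix(X, lags=2):
    """Stack each row with its `lags` previous rows: shape (n-lags, m*(lags+1))."""
    n = X.shape[0]
    return np.hstack([X[lags - k : n - k] for k in range(lags + 1)])

rng = np.random.default_rng(0)
t = rng.standard_normal((500, 2))                       # latent driving signals
C = rng.standard_normal((2, 4))
normal = t @ C + 0.1 * rng.standard_normal((500, 4))    # correlated "sensor" data
faulty = normal.copy()
faulty[250:, 1] += 1.0                                  # injected sensor bias as a toy fault

pca = PCA(n_components=6).fit(lagged_matrix(normal))

def spe(X):
    """Squared prediction error of each lagged sample under the PCA model."""
    Xl = lagged_matrix(X)
    recon = pca.inverse_transform(pca.transform(Xl))
    return np.sum((Xl - recon) ** 2, axis=1)

threshold = np.percentile(spe(normal), 99)              # simple empirical control limit
# ~0.5 is expected here, since the fault starts halfway through the run.
print("fraction of samples flagged in the faulty run:", np.mean(spe(faulty) > threshold))
```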
The study proposes a way of developing granular models based on optimized subsets of data with different sampling sizes, in which three widely used models, namely Support Vector Machine, K-Nearest Neighbor, and Long Short-Term Memory, are designed and transformed into granular versions to achieve good performance with sufficient functionality. First, a collection of subsets is determined using different sampling methods; these subsets subsequently serve as an essential prerequisite of the proposed models. Then, the principle of justifiable granularity is applied to the design of interval information granules based on the subsets of data. The design process is associated with a well-defined optimization problem realized by achieving a sound compromise between two conflicting criteria: coverage and specificity. To evaluate the performance of the granular models, two aspects are considered: (i) the sampling methods used in determining suitable subsets of data; (ii) the different models transformed into granular models. A series of experimental studies is conducted to verify the feasibility of the proposed granular models.
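The coverage/specificity trade-off mentioned above can be illustrated with a small sketch: search for an interval around the data's median that maximizes the product of coverage (fraction of points covered) and specificity (which decreases with interval width). This is one common textbook formulation of the principle of justifiable granularity, shown only to make the criterion concrete; it is not the study's exact design procedure.

```python
# Illustrative sketch of the principle of justifiable granularity: pick an
# interval [a, b] around the median that maximizes coverage * specificity,
# where coverage is the fraction of data inside the interval and
# specificity decreases linearly with the interval's width.
import numpy as np

def justifiable_interval(data, n_grid=120):
    data = np.sort(np.asarray(data, dtype=float))
    med = np.median(data)
    full_width = data[-1] - data[0]
    best, best_score = (med, med), -np.inf
    for a in np.linspace(data[0], med, n_grid):
        for b in np.linspace(med, data[-1], n_grid):
            coverage = np.mean((data >= a) & (data <= b))
            specificity = 1.0 - (b - a) / full_width
            score = coverage * specificity
            if score > best_score:
                best, best_score = (a, b), score
    return best, best_score

samples = np.random.default_rng(0).normal(loc=5.0, scale=1.5, size=300)
(a, b), score = justifiable_interval(samples)
print(f"granule: [{a:.2f}, {b:.2f}], coverage*specificity = {score:.3f}")
```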