This paper is concerned with an event-triggered distributed load frequency control (LFC) method for multi-area interconnected power systems. Firstly, because of the high dimension, nonlinearity, and uncertainty of the power system, the relevant model information cannot be fully obtained. To design the LFC algorithm when the model information is unknown, the equivalent functional relationship between the control signal and the area-control-error signal is established using a dynamic linearization technique. Secondly, a novel distributed load frequency control algorithm is proposed based on a controller dynamic-linearization method, and the controller parameters are tuned online by constructing a radial basis function neural network. In addition, to reduce the computation and communication burden on the system, an event-triggered mechanism is also designed, in which whether data are transmitted at the current instant is completely determined by a triggering condition. Rigorous analysis shows that the proposed method ensures that the frequency deviation of the power system converges to a bounded value. Finally, simulation results on a four-area power system verify the effectiveness of the proposed algorithm.
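The abstract does not give the triggering condition itself, so the following is a minimal sketch of how such a mechanism is commonly realized, assuming a simple absolute-deviation rule on the area-control-error (ACE) signal; the function name `event_triggered` and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def event_triggered(ace_now, ace_last_sent, threshold=0.05):
    """Illustrative triggering rule: transmit the current ACE sample only
    when it has drifted sufficiently from the last transmitted value.
    The paper's actual triggering condition is not reproduced here."""
    return np.abs(ace_now - ace_last_sent) > threshold

# Minimal usage: walk a sampled ACE signal and count transmissions.
ace = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.02 * np.random.randn(200)
last_sent = ace[0]
sent = 0
for a in ace[1:]:
    if event_triggered(a, last_sent):
        last_sent = a   # controller update uses the newly transmitted sample
        sent += 1       # otherwise the previously sent sample is held
print(f"transmitted {sent} of {len(ace) - 1} samples")
```

Under such a rule, communication load drops because samples close to the last transmitted value are simply not sent, at the cost of a bounded deviation between the held value and the true signal.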
The development of digital technologies opens up new opportunities for managing personalized learning using intelligent data analysis methods. The purpose of a comprehensive analysis of the data obtained in the learni...
ISBN:
(Print) 9798350350562; 9781713899310
Agent-based simulations can be helpful in understanding the complex dynamics of human behavior. Data-driven approaches for this purpose have shown promise in extracting complex features without relying on system-specific expert knowledge. This work aims to develop a data-driven approach that enables automatic generation of agent-based pedestrian flow models by extracting and classifying regions of interest from trajectory data. For validation purposes, synthetic data from a pedestrian movement simulation was used for the method development. We identify stay-point areas from the resulting trajectories, classify the processes occurring in these areas, and reconstruct their properties. The relevant areas and types of processes were successfully extracted in four different case scenarios. However, it is necessary to test and subsequently improve these methods using real data. Ultimately, our methods should be applied to the automatic modeling of pedestrian behavior in critical infrastructures, such as a railway station or an airport.
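The extraction step lends itself to a compact illustration. The sketch below is a generic stay-point detector over (x, y, t) trajectory samples, assuming simple distance and dwell-time thresholds; it is not the authors' implementation, and `stay_points` with its parameters is a hypothetical helper.

```python
import numpy as np

def stay_points(traj, dist_thresh=2.0, time_thresh=30.0):
    """Illustrative stay-point detector for a trajectory of (x, y, t) rows:
    a stay point is the centroid of consecutive samples that remain within
    dist_thresh metres of an anchor sample for at least time_thresh seconds."""
    points, i, n = [], 0, len(traj)
    while i < n:
        j = i + 1
        while j < n and np.hypot(traj[j, 0] - traj[i, 0],
                                 traj[j, 1] - traj[i, 1]) <= dist_thresh:
            j += 1
        if traj[j - 1, 2] - traj[i, 2] >= time_thresh:
            points.append(traj[i:j, :2].mean(axis=0))  # centroid of the dwell
        i = j
    return np.array(points)
```

The detected centroids would then be clustered into candidate regions of interest before the process occurring in each region is classified.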
Accurately predicting stock prices remains a formidable challenge in financial markets. Traditional predictive models often aggregate data from multiple companies, failing to account for the unique characteristics of each firm, which can hinder the model's ability to identify company-specific patterns. Moreover, existing research on stock price prediction frequently trains and tests models within the same group of companies, neglecting to assess their generalizability on 'Out-of-Sample' companies. This study addresses these limitations by employing BERT to encode business descriptions into vectors, capturing the distinctive attributes of each company. We further enhance the predictive modeling framework by developing features that describe the percentage change of existing indicators, adding significant novelty to the existing research. Additionally, we apply a Restricted Boltzmann Machine (RBM) for dimensionality reduction after the BERT encoding process. In our approach, both the technical indicators and the vectorized descriptions are treated as distinct elements within the transformer encoder. By integrating these representations, our model is better equipped to differentiate between firms and recognize their individual patterns. The proposed model demonstrates superior performance over baseline models, particularly when tested on 'Out-of-Sample' companies, highlighting its ability to learn, understand, and analyze company-specific descriptions for more accurate predictions. This research offers novel insights into addressing the heterogeneity in stock price prediction.
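As a rough illustration of the fusion idea, the sketch below builds percentage-change features and feeds the (already reduced) description vector and the indicator vectors as distinct tokens into a PyTorch transformer encoder. The class `StockEncoder`, its dimensions, and the prediction head are assumptions for illustration; the paper's actual architecture, the BERT encoding, and the RBM reduction steps are not reproduced.

```python
import torch
import torch.nn as nn

def pct_change(x):
    """Percentage change of an indicator series x (shape [T]);
    the first step has no predecessor and is set to zero."""
    out = torch.zeros_like(x)
    out[1:] = (x[1:] - x[:-1]) / (x[:-1].abs() + 1e-8)
    return out

class StockEncoder(nn.Module):
    """Illustrative fusion: the reduced description vector and each
    technical-indicator step enter the encoder as separate tokens,
    so attention can relate firm identity to price dynamics."""
    def __init__(self, ind_dim, desc_dim, d_model=64):
        super().__init__()
        self.ind_proj = nn.Linear(ind_dim, d_model)
        self.desc_proj = nn.Linear(desc_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, indicators, description):
        # indicators: [B, T, ind_dim], description: [B, desc_dim]
        tokens = torch.cat([self.desc_proj(description).unsqueeze(1),
                            self.ind_proj(indicators)], dim=1)
        return self.head(self.encoder(tokens)[:, 0])  # predict from the description token
```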
Postoperative critical care management of congenital heart disease patients requires prompt intervention when the patient deviates significantly from clinician-determined vital sign and hemodynamic goals. Current monitoring systems only allow static thresholds to be set on individual variables, despite the expectations that these signals change as the patient recovers and that variables interact. To address this incongruity, we have employed statistical process monitoring (SPM) techniques, originally developed to monitor batch industrial processes, to monitor high-frequency vital sign and hemodynamic data and establish multivariate trajectory maps for patients with d-transposition of the great arteries following the arterial switch operation. In addition to providing multivariate trajectory maps, the multivariate control charts produced by the SPM framework allow assessment of adherence to the desired trajectory at each time point as the data are collected. Control charts based on slow feature analysis were compared with those based on principal component analysis. Alarms generated by the multivariate control charts are discussed in the context of the available clinical documentation.
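For readers unfamiliar with SPM-style monitoring, the sketch below shows one common building block under assumed settings: a PCA model fitted to in-control reference data and a Hotelling T^2 statistic scored on new samples. The function `hotelling_t2_chart` and its defaults are illustrative; the slow-feature-analysis chart and the clinical control limits used in the study are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def hotelling_t2_chart(reference, monitored, n_components=3):
    """Illustrative multivariate control chart: fit PCA on in-control
    reference vital-sign/hemodynamic data, then score each new sample
    by Hotelling's T^2 in the retained principal-component subspace."""
    pca = PCA(n_components=n_components).fit(reference)
    scores = pca.transform(monitored)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
    return t2  # compare against a control limit to raise alarms
```

A sample whose T^2 exceeds the control limit indicates a departure from the reference trajectory in the correlated multivariate sense, which a per-variable static threshold may miss.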
Based on the theory of process reengineering and internal control, combined with the latest RPA and OCR technology, this paper optimizes and reconstructs the intelligent account reimbursement system, intelligent budget...
ISBN:
(Print) 9798350321050
To precisely construct the nonlinear dynamic analysis model of the N-cycloidal pin reducer in the transmission process, this paper employs load tooth contact analysis (LTCA) to determine the time-varying meshing clearance and the time-varying torsional stiffness, and it takes into account the influence of the revolution displacement and meshing damping of the cycloid gear pair. A six-degree-of-freedom (DOF) nonlinear dynamic analysis model is built based on the lumped parameter method. The dynamic differential equation is derived using the general Lagrange function, and the dynamic response is then calculated using the fourth-order Runge-Kutta method. The transmission errors (TE) under various examples of profile modification are calculated and analyzed from the results of the dynamic response. The results show that the transmission error value and its fluctuation increase with the meshing clearance, leading to a decrease in reducer stability.
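The time integration named in the abstract is the classical fourth-order Runge-Kutta scheme; a minimal step function is sketched below, assuming the six-DOF model has been written in first-order form y' = f(t, y) by stacking the generalized coordinates and their velocities. The name `rk4_step` is illustrative.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y);
    for the reducer model, y would stack the six DOF and their velocities."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```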
Time series missing data is a pervasive problem in many fields, especially in intelligent transportation systems, where it hinders the application of timing analysis methods and the fine adjustment of control strategies. The prevalent imputation approaches reconstruct missing data with high accuracy by exploiting a precise distribution model. However, the multistate characteristic of time series data and the uncertainty of the imputation process increase the difficulty of modeling the temporal data distribution and reduce imputation performance. In this paper, a novel time series generative adversarial imputation network (TGAIN) model is proposed to deal with the time series missing-data problem. The model combines the advantages of GAN-based data distribution modeling and multiple imputation's uncertainty handling. Specifically, the TGAIN network is designed and adversarially trained to learn the multistate distribution of missing time series data. Through the conditional vector constraint and the adversarial imputation process, the latent distribution for each missing position under different states can be effectively estimated based on implicit relationships with partial observation information. A corresponding multiple imputation strategy is then proposed to deal with the uncertainty of the imputation process; it determines the best fill value from the learned distribution. Furthermore, extensive experiments have been conducted on two real traffic flow datasets. The comparative results show that the proposed TGAIN not only has a better ability to model the time series data distribution and handle imputation uncertainty, but also performs more robustly and stably as the missing rate increases.
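The following is a loose sketch of the GAIN-style ingredients described above, assuming a simple fully connected generator and discriminator over one feature window; the layer sizes, the `impute` and `multiple_imputation` helpers, and the per-entry selection rule are assumptions for illustration, not the TGAIN architecture or its conditional-vector constraint.

```python
import torch
import torch.nn as nn

dim = 8  # features per window (assumed)
G = nn.Sequential(nn.Linear(dim * 2, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim), nn.Sigmoid())

def impute(x, mask):
    """x: observed values with zeros at missing positions; mask: 1 = observed.
    The generator fills the missing entries from the observed ones plus noise."""
    noise = torch.rand_like(x) * (1 - mask)
    filled = G(torch.cat([x * mask + noise, mask], dim=-1))
    return x * mask + filled * (1 - mask)

def multiple_imputation(x, mask, m=5):
    """Draw m candidate imputations and keep, per entry, the one the
    discriminator finds most plausible -- a simple stand-in for the
    paper's multiple-imputation selection of the best fill value."""
    candidates = torch.stack([impute(x, mask) for _ in range(m)])
    plausibility = D(candidates)
    best = plausibility.argmax(dim=0, keepdim=True)
    return candidates.gather(0, best).squeeze(0)
```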
Industrial soft sensing plays a crucial role in process modeling, optimization, and control, and is extensively utilized to predict hard-to-measure process variables. To effectively manage complex process data with varying time scales, a novel multiscale trend decomposition long short-term memory model based on feature selection (FS-MSTD-LSTM) is proposed. In the FS-MSTD-LSTM model, the random forest method is first employed to screen the feature sequences, thereby constructing the optimal input feature set and ensuring the model is provided with the most informative features. Following this feature selection, the seasonal-trend decomposition using loess algorithm is used to perform multi-scale sampling, capturing input features with short-, medium-, and long-term scales. Next, three LSTM subnetworks are constructed using these three-scale input features to perform predictions in a shared learning format, efficiently handling complex process data across different time scales. Consequently, the FS-MSTD-LSTM model for industrial soft sensing is developed. The performance of the proposed FS-MSTD-LSTM model is evaluated using two industrial datasets, and simulations on these datasets demonstrate that the FS-MSTD-LSTM model achieves higher soft sensing accuracy compared to other related methods, indicating its superior performance in managing complex process data with varying time scales.
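A condensed sketch of the first two stages is given below, assuming random-forest importances for feature screening and an STL decomposition whose trend, seasonal, and residual parts stand in for the long-, medium-, and short-term scales; the helper names and the mapping of components to scales are assumptions, and the three LSTM subnetworks are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.seasonal import STL

def select_features(X, y, top_k=5):
    """Rank candidate feature series by random-forest importance and
    keep the top_k as the model's input feature set."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1][:top_k]

def multiscale_components(series, period=24):
    """STL split of one selected feature into trend, seasonal, and residual
    components; under the assumed mapping these feed the long-, medium-,
    and short-term LSTM subnetworks respectively."""
    res = STL(series, period=period).fit()
    return res.trend, res.seasonal, res.resid
```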
Open-set Semi-supervised Learning (OSSL) considers a realistic setting in which unlabeled data may come from classes unseen in the labeled set, i.e., out-of-distribution (OOD) data, which can cause performance degradation in conventional SSL models. To handle this issue, in addition to the traditional in-distribution (ID) classifier, some existing OSSL approaches employ an extra OOD detection module to avoid the potential negative impact of the OOD data. Nevertheless, these approaches typically employ the entire set of open-set data during their training process, which may contain data unfriendly to the OSSL task that can negatively influence model performance. This inspires us to develop a robust open-set data selection strategy for OSSL. Through a theoretical understanding from the perspective of learning theory, we propose Wise Open-set Semi-supervised Learning (WiseOpen), a generic OSSL framework that selectively leverages the open-set data for training the model. By applying a gradient-variance-based selection mechanism, WiseOpen exploits a friendly subset instead of the whole open-set dataset to enhance the model's capability of ID classification. Moreover, to reduce the computational expense, we also propose two practical variants of WiseOpen, adopting low-frequency update and loss-based selection respectively. Extensive experiments demonstrate the effectiveness of WiseOpen in comparison with the state-of-the-art.
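Of the two practical variants, the loss-based selection is the simplest to illustrate. The sketch below keeps the fraction of open-set samples with the smallest unsupervised loss as the "friendly" subset for the current training round; `loss_based_select` and `keep_ratio` are illustrative names, and the gradient-variance-based mechanism itself is not shown.

```python
import torch

def loss_based_select(losses, keep_ratio=0.7):
    """Illustrative stand-in for the loss-based variant: return the indices
    of the keep_ratio fraction of open-set samples with the smallest
    per-sample loss, to be used as the friendly subset this round."""
    k = max(1, int(keep_ratio * losses.numel()))
    return torch.topk(-losses, k).indices  # smallest losses
```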