This study introduces a data-driven approach to state and output feedback control for the constrained output regulation problem in unknown linear discrete-time systems. Our method ensures effective tracking performance while satisfying the state and input constraints, even when the system matrices are not available. We first establish a sufficient condition for the existence of a solution pair to the regulator equations and propose a data-based approach that obtains the feedforward and feedback control gains for state feedback control via linear programming. Furthermore, we design a refined Luenberger observer that accurately estimates the system state while keeping the estimation error within a predefined set. Combining this observer with output regulation theory, we develop an output feedback control strategy. The closed-loop system is rigorously proved to be asymptotically stable by further leveraging the concept of λ-contractive sets.
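For context, the regulator equations referenced above are typically written as follows. This is only the standard discrete-time formulation, offered as an illustration: the symbols Π, Γ and the matrices A, B, C, D, S, Q are assumed notation, since the abstract does not fix its own.

```latex
% Standard discrete-time regulator (Francis) equations: for the plant
%   x_{k+1} = A x_k + B u_k,   exosystem  w_{k+1} = S w_k,
%   tracking error  e_k = C x_k + D u_k - Q w_k,
% a solution pair (\Pi, \Gamma) must satisfy
\begin{aligned}
  \Pi S &= A\Pi + B\Gamma, \\
  0     &= C\Pi + D\Gamma - Q.
\end{aligned}
% With a stabilizing feedback gain K, the control
%   u_k = K x_k + (\Gamma - K\Pi) w_k
% combines the feedback and feedforward gains the abstract refers to.
```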
Ciphertext Policy-Attribute Based Encryption (CP-ABE) is a secure one-to-many asymmetric encryption scheme in which access control of a shared resource is defined in terms of a set of attributes possessed by a user. Key...
Link prediction stands as a crucial network challenge, garnering attention over the past decade, with its significance heightened by the escalating volume of network data. In response to this need for fast and reliable prediction, this study introduces an innovative approach, the Anchor-aware Graph Autoencoder integrated with the Gini Index (AGA-GI), aimed at capturing the global positions of nodes within the link prediction framework. The proposed methodology encompasses three key components: anchor points, node-to-anchor paths, and node embedding. Anchor points within the network are identified using the graph structure as input. The determination of anchor positions involves computing the Gini indexes (GI) of nodes, leading to the generation of a candidate set of anchors. These anchor points are typically distributed across the network structure, facilitating substantial information exchange with other nodes. The location-based similarity approach computes the paths between anchor points and nodes. It identifies the shortest path, creating a node path information function that incorporates feature details and location similarity. The final embedding representation of each node is then formed by combining attributes, global location data, and neighbourhood structure through an autoencoder learning methodology. The Residual Capsule Network (RCN) model takes these node embeddings as input to learn feature representations of nodes and transforms the link prediction problem into a classification task. The proposed AGA-GI model is compared with various existing link prediction models, including learning from Subgraphs, Embeddings and Attributes for Link Prediction (SEAL), Dual-Encoder graph embedding with Alignment (DEAL), Spectral Clustering (SC), Deep Walk (DW), Graph Auto-encoder (GAE), Variational Graph Autoencoders (VGAE), Graph Attention Network (GAT), and Graph Conversion Capsule Link (G
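A minimal sketch of the anchor-selection idea may help make the pipeline concrete. It assumes the Gini index is computed over each node's neighbour-degree distribution and that anchors are the top-scoring nodes; the paper's exact definition may differ, and `networkx` plus all helper names here are assumptions, not the authors' code:

```python
import networkx as nx
import numpy as np

def gini(values):
    """Gini index of a non-negative 1-D array (0 = perfectly even distribution)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    # Standard closed form for sorted data: G = 2*sum(i*v_i)/(n*sum(v)) - (n+1)/n
    return 2.0 * np.sum(index * v) / (n * v.sum()) - (n + 1.0) / n

def select_anchors(G, k):
    """Score each node by the Gini index of its neighbours' degrees; keep the top k."""
    scores = {node: gini([G.degree(nbr) for nbr in G.neighbors(node)])
              for node in G.nodes}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def anchor_distance_features(G, anchors):
    """Shortest-path distance from every node to every anchor (global position info)."""
    feats = {node: [] for node in G.nodes}
    for a in anchors:
        dist = nx.single_source_shortest_path_length(G, a)
        for node in G.nodes:
            feats[node].append(dist.get(node, G.number_of_nodes()))  # unreachable -> large
    return feats

G = nx.karate_club_graph()
anchors = select_anchors(G, k=4)
features = anchor_distance_features(G, anchors)  # fed to the autoencoder alongside attributes
```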
Over the last couple of decades, community question-answering sites (CQAs) have been a topic of much academic interest. Researchers have often leveraged traditional machine learning (ML) and deep learning (DL) to explore the ever-growing volume of content that CQAs accumulate. To clarify the current state of the CQA literature that has used ML and DL, this paper reports a systematic literature review. The goal is to summarise and synthesise the major themes of CQA research related to (i) questions, (ii) answers and (iii) users. The final review included 133 articles. Popular research themes include question quality, answer quality, and expert identification. In terms of datasets, some of the most widely studied platforms include Yahoo! Answers, Stack Exchange and Stack Overflow. The scope of most articles was confined to just one platform, with few cross-platform studies. Articles with ML outnumber those with DL. However, the use of DL in CQA research is on an upward trajectory. A number of research directions are proposed.
Testing is usually an expensive and complex part of software development projects, but it is essential for gauging the effectiveness of the developed software. Software Fault Prediction (SFP) primarily serves to detect f...
Ransomware is one of the most advanced forms of malware: once it infects a system, it uses extensive computer resources and services to encrypt system data, causing large financial and data losses to organizations and individuals...
Workload pattern learning-based resource management is crucial in cloud computing environments for achieving higher performance, sustainability, fault tolerance, and quality of service. The existing literature lacks ...
The medical domain faces unique challenges in information retrieval (IR) due to the complexity of medical language and terminology discrepancies between user queries and documents. While traditional Keyword-Based Meth...
This systematic review gave special attention to diabetes and the advancements in food and nutrition needed to prevent or manage diabetes in all its forms. There are two main forms of diabetes mellitus: Type 1 (T1D) a...
Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favourable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation, applied only to a small region of an image at a time. Vision Transformers (ViT) use self-attention, a "global" operation that collects information from the entire image, so a ViT can successfully capture long-range semantic relevance within an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. We trained and tested the ViT model on the IEEE fundus image dataset (1750 healthy and glaucoma images) and the LAG fundus image dataset (4800 healthy and glaucoma images). During preprocessing, the datasets underwent image resizing, auto-rotation, and auto-contrast adjustment via adaptive equalization. The results demonstrated that this preprocessing, combined with the choice of optimizer, improved accuracy and other performance metrics. In particular, the Nadam optimizer raised accuracy to 97.8% on the adaptively equalized IEEE dataset and to 92% on the adaptively equalized LAG dataset, in both cases following the auto-rotation and image-resizing steps. In addition to integrating our Vision Transformer model with the shift tokenization model, we also combined ViT with a hybrid model consisting of six classifiers (SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest), selected according to which optimizer was most successful for each dataset. Empirical results show that the SVM model worked well, improving accuracy up to 93% with precision up to 94% in the adaptive equalization preprocess
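A minimal sketch of the preprocessing-plus-hybrid-classifier pipeline described above, assuming scikit-image's CLAHE routine for the adaptive equalization step and an SVM trained on precomputed ViT embeddings. The embedding extraction itself is elided, and every function and variable name here is illustrative rather than the study's actual code:

```python
import numpy as np
from skimage import exposure, transform
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

def preprocess(image, size=(224, 224)):
    """Resize, then apply adaptive histogram equalization (CLAHE)."""
    image = transform.resize(image, size, anti_aliasing=True)  # image resizing step
    return exposure.equalize_adapthist(image)                  # adaptive equalization step

def train_hybrid_svm(embeddings, labels):
    """Fit the SVM member of the hybrid model on ViT embeddings.

    embeddings: (N, d) array of ViT features for preprocessed fundus images.
    labels:     0 = healthy, 1 = glaucoma (assumed to be computed upstream).
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    preds = clf.predict(X_te)
    return accuracy_score(y_te, preds), precision_score(y_te, preds)
```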