This study investigates deterministic learning (DL)-based output-feedback neural control for a class of nonlinear sampled-data systems with prescribed performance (PP). Specifically, first, a sampled-data observer is employed to estimate the unavailable system states for the Euler discretization model of the transformed system dynamics. Then, based on the observations and the backstepping method, a discrete neural network (NN) controller is constructed to ensure system stability and achieve the desired tracking performance. The noncausal problem encountered during the controller derivation is resolved using a command filter. Moreover, the regression characteristics of the NN input signals are demonstrated with the observed states. This ensures that the radial basis function NN, based on DL theory, meets the partial persistent excitation condition. Subsequently, a class of discrete linear time-varying systems is proven to be exponentially stable, achieving partial convergence of the neural weights to their optimal/actual values. Consequently, accurate modeling of the unknown closed-loop dynamics is achieved along the system trajectory under output-feedback control. Finally, a knowledge-based controller is developed using the modeling results. This controller not only enhances the control performance but also ensures the PP of the tracking error. The effectiveness of the scheme is illustrated through simulation results.
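As a point of reference for the deterministic-learning idea above, the following minimal sketch shows a discrete Gaussian radial basis function network learning an unknown scalar dynamics term along a recurrent trajectory and then freezing the learned weights for reuse. The RBF grid, the normalized-gradient update with sigma-modification, and the toy dynamics are illustrative assumptions, not the paper's observer-based control law.

import numpy as np

centers = np.linspace(-2.0, 2.0, 21)        # RBF centers on a 1-D grid (assumed)
width = 0.3                                 # common Gaussian width (assumed)
W = np.zeros_like(centers)                  # NN weights to be learned
gamma, sigma = 0.5, 1e-3                    # learning gain and sigma-modification

def phi(x):
    # Gaussian regressor vector S(x)
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

def f_true(x):
    # unknown dynamics term (used only to simulate the plant)
    return 0.8 * np.sin(2.0 * x)

for k in range(20000):
    x = 1.5 * np.sin(0.01 * k)              # recurrent trajectory -> partial PE of the RBFs
    s = phi(x)
    e = f_true(x) - W @ s                   # prediction error along the trajectory
    W += gamma * e * s / (1.0 + s @ s) - gamma * sigma * W

W_learned = W.copy()                        # frozen weights: a learned model of f along the orbit
xs = np.linspace(-1.5, 1.5, 50)
approx = np.array([W_learned @ phi(v) for v in xs])
print("max modeling error on the orbit:", np.max(np.abs(f_true(xs) - approx)))

A knowledge-based controller of the kind described in the abstract would reuse the frozen weights W_learned directly, without further online adaptation.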
State-of-the-art recommender systems are increasingly focused on optimizing implementation efficiency, such as enabling on-device recommendations under memory constraints. Current methods commonly use lightweight embeddings for users and items or employ compact embeddings to enhance reusability and reduce memory usage. However, these approaches consider only the coarse-grained aspects of embeddings, overlooking subtle semantic nuances. This limitation results in an adverse degradation of meta-embedding performance, impeding the system's ability to capture intricate relationships between users and items and leading to suboptimal recommendations. To address this, we propose a novel approach to efficiently learn meta-embeddings of varying granularity and apply fine-grained meta-embeddings to strengthen the representation of their coarse-grained counterparts. Specifically, we introduce a recommender system based on a graph neural network, where each user and item is represented as a node. These nodes are directly connected to coarse-grained virtual nodes and indirectly linked to fine-grained virtual nodes, facilitating the learning of multi-grained semantics. Fine-grained semantics are captured through sparse meta-embeddings, which dynamically balance embedding uniqueness and memory constraints. To ensure their sparseness, we rely on initialization methods such as sparse principal component analysis combined with a soft thresholding activation function. Moreover, we propose a weight-bridging update strategy that aligns coarse-grained meta-embeddings with several fine-grained meta-embeddings based on the underlying semantic properties of users and items. Comprehensive experiments demonstrate that our method outperforms existing baselines. The code of our proposal is available at https://***/htyjers/C2F-MetaEmbed.
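To make the coarse-to-fine idea concrete, here is a small, hedged sketch (not the released C2F-MetaEmbed code) of composing a node representation from one coarse-grained meta-embedding plus a sparse, soft-thresholded mixture of fine-grained meta-embeddings. The table sizes, threshold, and hashed assignment are assumptions made only for this example.

import numpy as np

rng = np.random.default_rng(0)
d, n_coarse, n_fine = 16, 32, 256
coarse_table = rng.normal(scale=0.1, size=(n_coarse, d))   # shared coarse meta-embeddings
fine_table = rng.normal(scale=0.1, size=(n_fine, d))       # shared fine meta-embeddings

def soft_threshold(x, tau=0.05):
    # soft thresholding: shrinks small entries to exactly zero, keeping the embedding sparse
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def node_embedding(node_id, k=4):
    # coarse meta-embedding refined by k (hashed) fine-grained, sparsified meta-embeddings
    coarse = coarse_table[node_id % n_coarse]
    fine_ids = [(node_id * 31 + j * 17) % n_fine for j in range(k)]   # toy assignment rule
    fine = soft_threshold(fine_table[fine_ids].mean(axis=0))
    return coarse + fine

emb_user, emb_item = node_embedding(12345), node_embedding(678)
score = float(emb_user @ emb_item)                          # dot-product relevance for ranking
print("sparsity of fine part:", np.mean(soft_threshold(fine_table[0]) == 0.0))
print("user-item score:", score)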
Research on neuromorphic computing is driven by the vision that we can emulate brain-like computing capability, learning capability, and energy-efficiency in novel hardware. Unfortunately, this vision has so far been pursued in a half-hearted manner. Most current neuromorphic hardware (NMHW) employs brain-like spiking neurons instead of standard artificial neurons.
Researchers have recently achieved significant advances in deep learning techniques, which in turn have substantially advanced other research disciplines, such as natural language processing, image processing, speech recognition, and software engineering. Various deep learning techniques have been successfully employed to facilitate software engineering tasks, including code generation, software refactoring, and fault localization. Many studies have also been presented in top conferences and journals, demonstrating the applications of deep learning techniques in resolving various software engineering tasks. However, although several surveys have provided overall pictures of the application of deep learning techniques in software engineering, they focus more on the learning techniques, that is, what kind of deep learning techniques are employed and how deep models are trained or fine-tuned for software engineering tasks. We still lack surveys explaining the advances of subareas in software engineering driven by deep learning techniques, as well as the challenges and opportunities in each subarea. To this end, in this study, we present the first task-oriented survey on deep learning-based software engineering. It covers twelve major software engineering subareas significantly impacted by deep learning techniques. These subareas span the whole lifecycle of software development and maintenance, including requirements engineering, software development, testing, maintenance, and developer collaboration. As we believe that deep learning may provide an opportunity to revolutionize the whole discipline of software engineering, a survey covering as many subareas as possible can help future research push forward the frontier of deep learning-based software engineering more systematically. For each of the selected subareas, we highlight the major advances achieved by applying deep learning techniques, with pointers to the available datasets ...
In this paper, we investigate the optimization problem (OP) by applying the optimal control method. The optimization problem is reformulated as an optimal control problem (OCP) in which the controller (the iteration update) is designed to minimize the sum of costs at future time instants, which thus theoretically generates the “optimal algorithm” (fastest and most stable). By adopting the maximum principle and linearization via Taylor expansion, new algorithms are proposed. It is shown that the proposed algorithms have a superlinear convergence rate and thus converge more rapidly than gradient descent; meanwhile, they are superior to Newton's method because they are not divergent in general and can be applied in the case of a singular or indefinite Hessian matrix. More importantly, the OCP method contains gradient descent and Newton's method as special cases, which reveals the theoretical basis of these algorithms and how far they are from the optimal algorithm. The merits of the proposed optimization algorithms are illustrated by numerical experiments.
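For context on the claim that gradient descent and Newton's method arise as special cases, the short sketch below compares the two classical updates x_{k+1} = x_k - alpha * grad f(x_k) and x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k) on a smooth test function whose Hessian is indefinite at the starting point. It is not the paper's OCP-derived algorithm; the test function and step size are arbitrary choices for illustration.

import numpy as np

def f(x):
    return np.log(1.0 + x[0] ** 2) + 0.5 * (x[1] - 1.0) ** 2

def grad(x):
    return np.array([2.0 * x[0] / (1.0 + x[0] ** 2), x[1] - 1.0])

def hess(x):
    h11 = 2.0 * (1.0 - x[0] ** 2) / (1.0 + x[0] ** 2) ** 2
    return np.array([[h11, 0.0], [0.0, 1.0]])

x_gd = np.array([2.0, -1.0])                 # gradient descent iterate
x_nt = np.array([2.0, -1.0])                 # Newton iterate
alpha = 0.5
for _ in range(50):
    x_gd = x_gd - alpha * grad(x_gd)                        # converges to the minimizer (0, 1)
    x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))   # diverges in x[0]: Hessian is indefinite there

print("gradient descent:", x_gd)
print("Newton's method :", x_nt)

The indefinite Hessian at x[0] = 2 drives the Newton iterate away from the minimizer, which is exactly the failure mode that the proposed OCP-based algorithms are claimed to avoid.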
The maximum principle has bridged mathematical optimization to optimal control, ushering in significant developments and refinements in optimal control theory, notably during the 1960s with the advent of linear quadratic (LQ) control and linear quadratic estimation (LQE). This progression propelled optimal control theory into further advancements, encompassing stochastic control, robust/H-infinity control, model predictive control (MPC), networked control, and reinforcement learning control. Optimal control, established upon a rigorous mathematical foundation, extends static optimization theory to dynamic systems, exhibiting scientific essence, unity, and [...]. Since its inception, optimal control theory has played an indispensable core role across all control-related domains, including communication-constrained control in networked systems, consensus control, cooperative control, and reinforcement learning control.
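For readers unfamiliar with the LQ formulation mentioned above, the standard finite-horizon discrete-time statement and its Riccati solution are recalled below in textbook form; this is not taken from the abstract itself.

For the linear system $x_{k+1} = A x_k + B u_k$ with cost
$$J = x_N^\top Q_N x_N + \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right),$$
the optimal control is the linear state feedback
$$u_k^* = -K_k x_k, \qquad K_k = \left( R + B^\top P_{k+1} B \right)^{-1} B^\top P_{k+1} A,$$
where $P_k$ is generated by the backward Riccati recursion
$$P_k = Q + A^\top P_{k+1} A - A^\top P_{k+1} B \left( R + B^\top P_{k+1} B \right)^{-1} B^\top P_{k+1} A, \qquad P_N = Q_N.$$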
The use of metamaterial enhances the performance of a specific class of antennas known as metamaterial antennas. The radiation cost and quality factor of the antenna are influenced by the size of the antenna. Metamaterial antennas allow for the circumvention of the bandwidth restriction for small antennas. Antenna parameters have recently been predicted using machine learning algorithms in existing works. Machine learning can take the place of the manual process of experimenting to find the ideal simulated antenna parameters. The accuracy of the prediction will be primarily dependent on the model that is used. In this paper, a novel method for forecasting the bandwidth of the metamaterial antenna is proposed, based on using the Pearson kernel as a standard kernel. Along with these new approaches, this paper suggests a unique hypersphere-based normalization to normalize the values of the dataset attributes and a dimensionality reduction method based on the Pearson kernel to reduce the dimension. A novel algorithm for optimizing the parameters of a Convolutional Neural Network (CNN), based on an improved Bat Algorithm-based Optimization with Pearson Mutation (BAO-PM), is also presented in this work. The prediction results of the proposed work are better when compared to the existing models in the literature.
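Since the abstract does not spell out the Pearson-kernel and hypersphere-normalization steps, the sketch below shows one common reading of them: pairwise Pearson correlations between samples as a Gram matrix, unit-norm projection of each sample, and a kernel-PCA-style reduction on the centered Gram matrix. All names and sizes are illustrative assumptions; the paper's exact definitions and the BAO-PM optimizer are not reproduced here.

import numpy as np

def hypersphere_normalize(X):
    # scale each sample to unit Euclidean norm, i.e., project onto the unit hypersphere
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.clip(norms, 1e-12, None)

def pearson_kernel(X):
    # Gram matrix of pairwise Pearson correlations between samples (rows of X)
    Xc = X - X.mean(axis=1, keepdims=True)
    Xc /= np.clip(np.linalg.norm(Xc, axis=1, keepdims=True), 1e-12, None)
    return Xc @ Xc.T                                   # entries in [-1, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 12))                         # toy data: 100 designs, 12 attributes
K = pearson_kernel(hypersphere_normalize(X))

n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n                    # centering matrix for kernel PCA
Kc = H @ K @ H
eigvals, eigvecs = np.linalg.eigh(Kc)
Z = eigvecs[:, -3:] * np.sqrt(np.clip(eigvals[-3:], 0, None))   # reduced 3-D representation
print(K.shape, Z.shape)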
With the rise of artificial intelligence and cloud computing, machine-learning-as-a-service platforms, such as Google, Amazon, and IBM, have emerged to provide sophisticated tasks for cloud applications. These proprietary models are vulnerable to model extraction attacks due to their commercial value. In this paper, we propose a time-efficient model extraction attack framework called Swift Theft that aims to steal the functionality of cloud-based deep neural network models. We distinguish Swift Theft from existing works with a novel distribution estimation algorithm and reference model settings, finding the most informative query samples without querying the victim model. The selected query samples can be applied to various cloud models with a one-time selection. We evaluate our proposed method through extensive experiments on three victim models and six datasets, with up to 16 models for each dataset. Compared to existing attacks, Swift Theft increases agreement (i.e., similarity) by 8% while consuming 98% less selection time.
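As background for the extraction workflow described above (offline query selection, a single round of victim queries, surrogate training, agreement), here is a generic toy sketch. The linear victim, noisy reference model, margin-based selection rule, and least-squares surrogate are assumptions for illustration only and are not the Swift Theft algorithm.

import numpy as np

rng = np.random.default_rng(2)
X_pool = rng.normal(size=(5000, 20))                 # unlabeled candidate queries
X_test = rng.normal(size=(1000, 20))

w_victim = rng.normal(size=20)                       # black-box victim (unknown to attacker)
victim = lambda X: (X @ w_victim > 0).astype(int)

w_ref = w_victim + rng.normal(scale=1.0, size=20)    # attacker's rough reference model
margin = np.abs(X_pool @ w_ref)                      # small margin ~ uncertain, informative samples
budget = 200
idx = np.argsort(margin)[:budget]                    # select the query budget offline

X_q, y_q = X_pool[idx], victim(X_pool[idx])          # the only victim queries made
w_sub, *_ = np.linalg.lstsq(X_q, 2.0 * y_q - 1.0, rcond=None)   # fit a linear surrogate
surrogate = lambda X: (X @ w_sub > 0).astype(int)

agreement = np.mean(surrogate(X_test) == victim(X_test))
print(f"agreement with victim: {agreement:.3f}")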
WiFi-based indoor positioning has emerged as a crucial technology for enabling smart consumer electronic applications, particularly in large-scale buildings. The construction of WiFi fingerprint databases using receiv...
The poor surface conditions and osseointegration capacity of 3D printed Ti6Al4V implants (3DPT) significantly influence their performance as orthopedic and dental implants. In this work, we introduce a one-step femtosecond laser treatment to improve the surface conditions and osseointegration. The surface characterization, mechanical properties, corrosion resistance, and biological responses were investigated. The results show that the femtosecond laser eliminated defects such as embedded powders and superficial cracks while forming nanocone-like structures on the 3DPT surface, leading to enhanced osseointegration, anti-corrosion, and anti-fatigue properties. Molecular dynamics simulations revealed the ablation removal mechanism and the formation of the nanocone-like structures. These findings were further supported by the in vivo studies, showing that the FS-treated implants had superior bone-implant contact and osseointegration. Overall, the one-step femtosecond laser method is regarded as a promising surface modification method for improving the functional performance of Ti-based orthopedic implants.