In this paper, we propose a mapping-aware weight pruning method for in-memory computing (IMC) architectures that operate on a row-by-row basis. Our proposed method can dynamically skip unnecessary row operations to mi...
We propose a lattice design that allows multiple topologically protected edge modes. The scattering between these modes, which is linear, energy-preserving, and robust against local disorder, is discussed in terms ...
The low-frequency stability of grid-connected virtual synchronous generators (VSGs) can be degraded if power controller parameters are not properly tuned under high active power transfer and strong grid conditions. Th...
The rapid increase in the quantity of industrial data in Industry 4.0/5.0 poses several challenging issues, such as heterogeneous data generation, data sensing and collection, real-time data processing, and high request arrival rates. The classical intrusion detection system (IDS) is not a practical solution for the Industry 4.0 environment owing to its resource limitations. To resolve these issues, this paper designs a new Chaotic Cuckoo Search Optimization Algorithm (CCSOA) with an optimal wavelet kernel extreme learning machine (OWKELM), named the CCSOA-OWKELM technique, for IDS in the Industry 4.0 environment. The CCSOA-OWKELM technique focuses on the design of feature selection with a classification approach to achieve minimum computational complexity and maximum detection accuracy. It involves the design of a CCSOA-based feature selection technique, which incorporates the concepts of chaotic maps into cuckoo search. Next, the OWKELM technique is applied for the intrusion detection and classification process. In addition, the OWKELM technique is derived by hyperparameter tuning of the WKELM technique using the sunflower optimization (SFO) algorithm. The utilization of CCSOA for feature subset selection and SFO-based hyperparameter tuning leads to better performance. To guarantee the supreme performance of the CCSOA-OWKELM technique, a wide range of experiments were conducted on two benchmark datasets, and the experimental outcomes demonstrate the promising performance of the CCSOA-OWKELM technique over recent state-of-the-art techniques.
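The chaotic-map-plus-cuckoo-search feature selection described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it replaces the full Lévy-flight step with a chaos-driven bit-flip mutation, and the fitness function, parameter values, and `chaotic_cuckoo_feature_selection` name are all illustrative assumptions.

```python
import numpy as np

def logistic_map(x, r=4.0):
    # Chaotic logistic map, a common way "chaotic" variants of
    # cuckoo search replace some uniform random draws.
    return r * x * (1.0 - x)

def chaotic_cuckoo_feature_selection(fitness, n_features, n_nests=10,
                                     n_iters=50, pa=0.25, seed=0):
    """Binary feature-subset selection via a simplified chaotic cuckoo search.

    fitness: callable mapping a boolean mask -> score to MAXIMIZE.
    Returns the best mask found and its score.
    """
    rng = np.random.default_rng(seed)
    nests = rng.random((n_nests, n_features)) > 0.5   # candidate subsets
    scores = np.array([fitness(n) for n in nests])
    chaos = 0.7                                       # chaotic sequence state
    for _ in range(n_iters):
        for i in range(n_nests):
            chaos = logistic_map(chaos)
            # Chaos-driven bit flips stand in for the Lévy-flight step
            # of full cuckoo search.
            flips = rng.random(n_features) < 0.1 + 0.2 * chaos
            cand = nests[i] ^ flips
            s = fitness(cand)
            j = rng.integers(n_nests)                 # compare to a random nest
            if s > scores[j]:
                nests[j], scores[j] = cand, s
        # Abandon a fraction pa of the worst nests, as in standard CS.
        worst = np.argsort(scores)[: max(1, int(pa * n_nests))]
        nests[worst] = rng.random((len(worst), n_features)) > 0.5
        scores[worst] = [fitness(n) for n in nests[worst]]
    best = int(np.argmax(scores))
    return nests[best], scores[best]

# Toy demonstration: recover a known-good subset (first 3 of 10 features).
target = np.zeros(10, dtype=bool)
target[:3] = True
fit = lambda m: -int(np.sum(m ^ target))              # negative Hamming distance
mask, score = chaotic_cuckoo_feature_selection(fit, n_features=10)
```

In the paper's pipeline the fitness would instead score a mask by the detection accuracy of the downstream WKELM classifier on the selected features.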
With the growth of big data in the past few decades, compression has become inseparable from data generation. The data generated daily across different platforms are correlated: friend networks on Facebook and Instagr...
Digital compute-in-memory (DCIM) architectures are becoming crucial for real-time and accurate deep neural network (DNN) inference due to their capacity for precise computations. However, traditional DCIM systems ofte...
The virtual synchronous generator (VSG) is an attractive grid-forming technique that emulates the swing equation of conventional synchronous generators, and can compensate for the power grid's weakened inertia and...
Currently, an urgent task is developing methods and tools to increase the security of Smart Parking software systems. The purpose of this study is to analyze the requirements for Smart Parking System software ...
The area of neurosymbolic artificial intelligence (Neurosymbolic AI) is rapidly developing and has become a popular research topic, encompassing subfields, such as neurosymbolic deep learning and neurosymbolic reinfor...
An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline, then adapt it to downstream tasks, often by re-training only the last linear layer of the network. This approach yields strong downstream performance in a variety of contexts, demonstrating that multitask pretraining leads to effective feature learning. Although several recent theoretical studies have shown that shallow NNs learn meaningful features when either (i) they are trained on a single task or (ii) they are linear, very little is known about the closer-to-practice case of nonlinear NNs trained on multiple tasks. In this work, we present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks. Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks. Using this observation, we show that when the tasks are binary classification tasks with labels depending on the projection of the data onto an r-dimensional subspace within the d-dimensional input space (d ≫ r), a simple gradient-based multitask learning algorithm on a two-layer ReLU NN recovers this projection, allowing for generalization to downstream tasks with sample and neuron complexity independent of d. In contrast, we show that, with high probability over the draw of a single task, training on that single task alone cannot be guaranteed to learn all r ground-truth features. Copyright 2024 by the author(s)