Accurate displacement calculation in the laminated iron cores of electric machines is important for accurately predicting noise generation in such machines. The authors proposed a modeling method of displ...
The onset of Industry 4.0 is rapidly transforming the manufacturing world through the integration of cloud computing, machine learning (ML), artificial intelligence (AI), and universal network connectivity, resulting ...
Network coverage prediction is an important aspect of 4G network planning and optimization. In this study, we conducted a comprehensive analysis of the performance of various machine learning algorithms in predicting ...
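As a rough illustration of the kind of algorithm comparison the abstract describes, the sketch below cross-validates a few off-the-shelf regressors on synthetic coverage data; the feature set, the path-loss-style target, and the chosen models are assumptions made for illustration, not the study's actual data or algorithms.

```python
# Minimal sketch (not the study's pipeline): comparing a few regressors on a
# synthetic coverage-prediction task. Features, target, and models are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: distance to serving cell (km), antenna height (m),
# carrier frequency (MHz), and a clutter-class indicator.
X = np.column_stack([
    rng.uniform(0.05, 5.0, n),          # distance_km
    rng.uniform(10, 60, n),             # antenna_height_m
    rng.choice([800, 1800, 2600], n),   # freq_mhz
    rng.integers(0, 3, n),              # clutter class
])
# Synthetic RSRP (dBm): a simple log-distance path-loss trend plus noise.
y = -60 - 35 * np.log10(X[:, 0] + 0.05) - 0.01 * X[:, 2] / 100 + rng.normal(0, 4, n)

# No feature scaling here; a real comparison would standardize inputs (e.g. for KNN).
models = {
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=10),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f} dB")
```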
An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline, then adapt it to downstream tasks, often by re-training only the last linear layer of the network. This approach yields strong downstream performance in a variety of contexts, demonstrating that multitask pretraining leads to effective feature learning. Although several recent theoretical studies have shown that shallow NNs learn meaningful features when either (i) they are trained on a single task or (ii) they are linear, very little is known about the closer-to-practice case of nonlinear NNs trained on multiple tasks. In this work, we present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks. Our key insight is that multitask pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks. Using this observation, we show that when the tasks are binary classification tasks with labels depending on the projection of the data onto an r-dimensional subspace within the d-dimensional input space (with d ≫ r), a simple gradient-based multitask learning algorithm on a two-layer ReLU NN recovers this projection, allowing for generalization to downstream tasks with sample and neuron complexity independent of d. In contrast, we show that, with high probability over the draw of a single task, training on that single task alone is not guaranteed to learn all r ground-truth features. Copyright 2024 by the author(s).
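The setting described in the abstract (binary tasks whose labels depend on a projection onto an r-dimensional subspace, a two-layer ReLU network pretrained on many tasks, and a downstream task fit by retraining only the last linear layer) can be mocked up in a few lines. The sketch below uses full-batch Adam on a logistic loss as a stand-in training procedure; the dimensions, widths, and optimizer are illustrative assumptions, not the authors' algorithm or its guarantees.

```python
# Illustrative sketch only: labels are sign(<a_t, P x>) for random task vectors a_t
# in an r-dimensional subspace P of R^d; a shared two-layer ReLU net with per-task
# heads is pretrained, then a new last layer is fit on frozen features.
import torch

torch.manual_seed(0)
d, r, n_tasks, n_per_task, width = 50, 3, 40, 200, 128

# Ground-truth r-dimensional subspace (orthonormal rows of P) and per-task directions.
P = torch.linalg.qr(torch.randn(d, r)).Q.T               # shape (r, d)
task_dirs = torch.randn(n_tasks, r)

X = torch.randn(n_tasks, n_per_task, d)
Y = torch.sign(torch.einsum("tr,rd,tnd->tn", task_dirs, P, X))   # labels in {-1, +1}

# Two-layer ReLU network: shared hidden layer, one linear head per task.
hidden = torch.nn.Linear(d, width)
heads = torch.nn.Parameter(torch.randn(n_tasks, width) / width**0.5)

opt = torch.optim.Adam(list(hidden.parameters()) + [heads], lr=1e-2)
for step in range(500):
    feats = torch.relu(hidden(X))                          # (n_tasks, n, width)
    logits = torch.einsum("tnw,tw->tn", feats, heads)
    loss = torch.nn.functional.softplus(-Y * logits).mean()  # logistic loss
    opt.zero_grad(); loss.backward(); opt.step()

# Downstream task: freeze the pretrained features, retrain only a new last layer.
a_new = torch.randn(r)
X_new = torch.randn(500, d)
Y_new = torch.sign(X_new @ P.T @ a_new)
with torch.no_grad():
    F_new = torch.relu(hidden(X_new))
w = torch.zeros(width, requires_grad=True)
opt2 = torch.optim.Adam([w], lr=1e-2)
for step in range(300):
    loss = torch.nn.functional.softplus(-Y_new * (F_new @ w)).mean()
    opt2.zero_grad(); loss.backward(); opt2.step()

acc = ((F_new @ w).sign() == Y_new).float().mean()
print(f"downstream accuracy with frozen pretrained features: {acc.item():.3f}")
```

If the shared hidden layer has captured the r-dimensional subspace during pretraining, the frozen-feature downstream fit should succeed even though the new task was never seen; this is the intuition the sketch is meant to make concrete, not a reproduction of the paper's analysis.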
This paper investigates the application of Explainable AI (XAI) techniques in evaluating the features of indoor positioning systems, with a focus on improving model transparency and interpretability. Indoor positionin...
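As one hedged example of an XAI-style feature evaluation in this setting, the sketch below ranks synthetic per-access-point RSSI features with permutation importance; the access points, zones, classifier, and data-generating model are invented for illustration and are not the paper's data or its chosen XAI techniques.

```python
# Toy illustration: permutation importance (a common model-agnostic XAI diagnostic)
# applied to an indoor-positioning classifier trained on synthetic RSSI features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_aps, n_zones = 3000, 6, 4
zones = rng.integers(0, n_zones, n)
# Synthetic RSSI (dBm): each zone has a characteristic mean signature per access point.
signatures = rng.uniform(-90, -40, (n_zones, n_aps))
rssi = signatures[zones] + rng.normal(0, 6, (n, n_aps))

X_tr, X_te, y_tr, y_te = train_test_split(rssi, zones, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for ap in np.argsort(result.importances_mean)[::-1]:
    print(f"AP_{ap}: importance = {result.importances_mean[ap]:.3f}")
```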
Delay-sensitive applications are increasingly in demand as a result of the development of information systems and the expansion of communication in cloud computing technologies. Some of these requests will b...
A hybrid digital holographic microscope is used to record a hologram of biological cells with fluorescent and phase imaging. The resultant hologram is processed for phase recovery using a deep learning algorithm to extract 3...
This paper presents human fall detection using convolutional neural network (CNN) classification and the MobileNet algorithm. To reduce the number of parameters, we propose a vision-based fall detection model, th...
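A minimal sketch of a MobileNet-style low-parameter classifier for this kind of task is shown below; the pretrained MobileNetV2 backbone, the two-class (fall / no-fall) head, the frozen feature extractor, and the 224x224 input size are assumptions made for illustration rather than the paper's exact architecture.

```python
# Sketch (not the paper's model): MobileNetV2 backbone with a small binary head.
import torch
from torchvision import models

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
# Replace the 1000-class ImageNet head with a 2-class (fall / no-fall) head,
# keeping the lightweight depthwise-separable feature extractor.
in_features = backbone.classifier[1].in_features
backbone.classifier[1] = torch.nn.Linear(in_features, 2)

# Freeze the feature extractor so only the new head is trained,
# which keeps the number of trainable parameters small.
for p in backbone.features.parameters():
    p.requires_grad = False

n_trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
print(f"trainable parameters: {n_trainable:,}")

# Forward pass on a dummy batch of 224x224 RGB frames.
logits = backbone(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```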
The rapid expansion of extended electric vehicle (xEV) adoption necessitates optimizing energy storage systems (ESS) management for enhanced performance, longevity, and reliability. However, traditional ESS management...
Production machinery failures often cause significant financial losses, and traditional maintenance methods may struggle to mitigate these issues effectively. Predictive maintenance offers a modern solution by forecas...
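As a toy illustration of forecasting failures from sensor data, the sketch below trains a gradient-boosted classifier on synthetic vibration, temperature, and runtime features; the features, the failure rule, and the model choice are assumptions for illustration, not the approach evaluated in the paper.

```python
# Toy predictive-maintenance classifier on synthetic sensor features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
vibration = rng.gamma(2.0, 1.5, n)          # vibration RMS (mm/s)
temperature = rng.normal(70, 10, n)         # bearing temperature (deg C)
run_hours = rng.uniform(0, 20000, n)        # hours since last overhaul
# Synthetic label: failure risk grows with vibration, temperature, and age.
risk = 0.08 * vibration + 0.03 * (temperature - 70) + 0.0001 * run_hours
fails = (risk + rng.normal(0, 0.5, n)) > 1.5

X = np.column_stack([vibration, temperature, run_hours])
X_tr, X_te, y_tr, y_te = train_test_split(X, fails, stratify=fails, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["healthy", "fail"]))
```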