ISBN:
(Print) 9798350349603; 9798350349597
Artificial Intelligence (AI) has emerged as a pivotal technology across various sectors, including healthcare, transportation, and the development of smart cities, revolutionizing service delivery and operational efficiency. However, the adoption of new data-driven services built on centralized training has been hindered by significant concerns over privacy and data security, as these traditional techniques potentially expose sensitive information to breaches. Federated Learning (FL) presents a compelling solution to this dilemma, enabling decentralized data processing without compromising privacy. When integrated with Edge AI, FL enables the collaborative training of models on data distributed across different clients. Nevertheless, implementing FL on edge devices introduces challenges due to the limited computational and memory resources of such tiny devices. In particular, the backpropagation (BP) phase of model training is notably resource-intensive, posing a barrier to efficient deployment. To address this, we replaced the backpropagation phase with a forward-forward (FF) algorithm. Moreover, we integrated and compared several loss functions, namely Hinton, Symba, and Swish, to assess their compatibility and efficiency in the context of forward-forward training within the federated learning framework. The study indicates that our method leads to a slight decrease in accuracy on large and complex datasets compared to the traditional BP technique, but it has the potential to improve runtime and reduce memory overhead. The proposed technique represents a promising path toward the broader adoption of Edge AI by addressing critical technical challenges, namely privacy concerns and on-chip model training.
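The federated side of such a setup can be illustrated with a plain FedAvg aggregation step (a generic sketch, not this paper's implementation): each client trains locally, whether by FF or BP, and only model weights, never raw data, reach the server.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: a weighted average of locally trained parameter
    vectors. Raw client data never leaves the device; only weights do."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

The server would broadcast the merged weights back to clients for the next local FF training round.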
ISBN:
(Print) 9781665488679
The forward-forward (FF) algorithm is a new method for training neural networks, proposed as an alternative to the traditional Backpropagation (BP) algorithm by Hinton. The FF algorithm replaces the backward computations in the learning process with another forward pass. Each layer has an objective function, which aims to be high for positive data and low for negative ones. This paper presents a preliminary investigation into variations of the FF algorithm, such as incorporating a local Backpropagation to create a hybrid network that robustly converges while preserving the ability to avoid backward computations when needed, for example, in non-differentiable areas of the network. Additionally, a pseudo-random logic for selecting trainable stacks of layers at each epoch is proposed to speed up the learning process.
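The per-layer objective described above can be sketched as follows, using Hinton's "goodness" (sum of squared activations) with a logistic loss around a threshold; the layer sizes, threshold, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def goodness(h):
    # Hinton's goodness: sum of squared activations per sample
    return (h ** 2).sum(axis=1)

class FFLayer:
    """A single layer trained with a local forward-forward objective:
    push goodness above theta for positive data, below theta for negative."""

    def __init__(self, n_in, n_out, theta=2.0, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.theta, self.lr = theta, lr

    def _normalize(self, x):
        # length-normalize the input so a layer cannot judge by the
        # goodness its predecessor already computed
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(self._normalize(x) @ self.W, 0.0)

    def train_step(self, x_pos, x_neg):
        # one local gradient step on the logistic loss of (goodness - theta);
        # no error signal is propagated to any other layer
        for x, is_pos in ((x_pos, True), (x_neg, False)):
            xn = self._normalize(x)
            h = np.maximum(xn @ self.W, 0.0)
            p = 1.0 / (1.0 + np.exp(-(goodness(h) - self.theta)))
            dL_dg = (p - 1.0) if is_pos else p     # d(loss)/d(goodness)
            self.W -= self.lr * xn.T @ (dL_dg[:, None] * 2.0 * h) / len(x)
```

A full network stacks such layers, each trained with only its own forward passes, which is what makes hybrids with local BP (as investigated in the paper) straightforward to mix in per layer.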
The backpropagation (BP) algorithm has played a significant role in the development of deep learning. However, it has some limitations, such as getting stuck in local minima and experiencing vanishing/exploding gradients, which have led to questions about its biological plausibility. To address these limitations, alternatives to backpropagation have been preliminarily explored, with the forward-forward (FF) algorithm being one of the best known. In this paper, we propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization. Unlike FF, CaFo directly outputs label distributions at each cascaded block and waives the requirement of generating additional negative samples. Consequently, CaFo leads to a more efficient process at both the training and testing stages. Moreover, in our CaFo framework each block can be trained in parallel, allowing easy deployment to parallel acceleration systems. The proposed method is evaluated on four public image classification benchmarks, and the experimental results illustrate significant improvement in prediction accuracy in comparison with recently proposed baselines. The code is available at: https://***/Graph-ZKY/CaFo.
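The cascaded idea can be sketched in miniature (a hedged illustration, not the paper's architecture): each block pairs a frozen feature map with a small softmax head trained only on a local cross-entropy loss, so blocks never exchange gradients, and final predictions aggregate the per-block label distributions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class CascadedBlock:
    """A frozen random feature layer plus a locally trained softmax head.
    Each block emits a label distribution directly; no gradient crosses
    block boundaries, so blocks could be trained in parallel."""

    def __init__(self, n_in, n_hidden, n_classes, seed):
        rng = np.random.default_rng(seed)
        self.Wf = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_hidden))  # frozen
        self.Wp = np.zeros((n_hidden, n_classes))                         # trained

    def features(self, x):
        return np.maximum(x @ self.Wf, 0.0)

    def fit(self, x, y_onehot, lr=0.5, steps=300):
        h = self.features(x)
        for _ in range(steps):
            p = softmax(h @ self.Wp)
            self.Wp -= lr * h.T @ (p - y_onehot) / len(x)  # local CE gradient

    def predict(self, x):
        return softmax(self.features(x) @ self.Wp)

def cafo_predict(blocks, x):
    # aggregate the per-block label distributions (simple average here)
    feats, out = x, 0.0
    for b in blocks:
        out = out + b.predict(feats)
        feats = b.features(feats)
    return out / len(blocks)
```

Note that no negative samples appear anywhere: each head is fit directly against the true label distribution, which is the efficiency point the abstract makes.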
ISBN:
(Print) 9783031732928; 9783031732904
We introduce a fast Self-adapting Forward-Forward Network (SaFF-Net) for medical imaging analysis, mitigating power consumption and resource limitations that currently stem primarily from the prevalent reliance on back-propagation for model training and fine-tuning. Building upon the recently proposed forward-forward algorithm (FFA), we introduce the Convolutional Forward-Forward Algorithm (CFFA), a parameter-efficient reformulation that is suitable for advanced image analysis and overcomes the speed and generalisation constraints of the original FFA. To address the hyper-parameter sensitivity of FFAs, we also introduce SaFF-Net, a self-adapting framework that fine-tunes parameters during warm-up and training in parallel. Our approach enables more effective model training and eliminates the previously essential requirement for an arbitrarily chosen goodness function in FFA. We evaluate our approach on several benchmark datasets against standard back-propagation (BP) neural networks, showing that FFA-based networks with notably fewer parameters and function evaluations can compete with standard models, especially in one-shot scenarios and at large batch sizes.
Data-driven models have emerged as popular choices for fault detection and isolation (FDI) in process industries. However, updating these models in real time as data streams in requires significant computational resources and is tedious, which poses difficulties for fault detection. To address this problem, in this study we developed a novel forward-learning neural network framework that can efficiently update data-driven models in real time on high-frequency data without compromising accuracy. The network parameters are updated with a suitably constructed forward-forward learning algorithm instead of the traditional back-propagation algorithm. First, we develop a variance-capturing forward-forward autoencoder (VFFAE) for FDI. We then show that a previously trained VFFAE model can be quickly adapted to incoming data, demonstrating the efficacy of the proposed framework. We validate the approach on three process case studies: the Tennessee-Eastman dataset, a nuclear power flux dataset, and a wastewater plant dataset. Our findings demonstrate that within the initial 90 s, the model underwent 90 updates using the forward-forward approach but only 10 updates using backpropagation-based methods, without compromising accuracy. This highlights the model's capacity to handle streaming data effectively during the modeling process.
Deep learning models offer limited flexibility in the computational burden of model adaptation, owing to the conventional use of backpropagation for training. To address this problem, we propose an alternative training methodology inspired by the forward-forward algorithm originally designed for classification tasks. We extend this concept through a kernel-based modification, enabling its application to regression tasks, which are commonly encountered in process systems modeling. Our proposed Kernel-based Forward-Propagating Neural Network (K-FP-NN) eliminates backpropagation, using layer-wise updates for better adaptability. We introduce a real-time (RT) updating framework, RT-K-FP-NN, to continuously refine model parameters with new data. Results indicate that when applied to model predictive control of a continuous stirred tank reactor (CSTR) system, our approach updates the model within 100 s, achieving better performance metrics than backpropagation-based real-time models, which require 326 s. This framework can be applied to various dynamic systems, enhancing real-time decision-making through improved predictive accuracy and system adaptability.
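The appeal of backpropagation-free regression for real-time updating can be illustrated with a generic sketch (not the paper's K-FP-NN): a frozen random ReLU feature layer feeding a ridge readout solved in closed form, so refitting on fresh data is a single linear solve rather than a BP training run.

```python
import numpy as np

class ForwardRegressor:
    """Backpropagation-free regression: a frozen random ReLU feature layer
    and a ridge readout solved in closed form. Refitting on new data is one
    linear solve, which is what makes streaming updates cheap."""

    def __init__(self, n_in, n_hidden=64, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0, (n_in + 1, n_hidden))  # +1 bias row
        self.lam, self.beta = lam, None

    def _phi(self, x):
        xb = np.hstack([x, np.ones((len(x), 1))])  # append bias column
        return np.maximum(xb @ self.W, 0.0)

    def fit(self, x, y):
        H = self._phi(x)
        A = H.T @ H + self.lam * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, x):
        return self._phi(x) @ self.beta
```

In a streaming setting one would simply call `fit` again on a sliding window of recent samples at each update interval.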
An analog Voice Activity Detector (VAD) is a promising candidate for a power- and cost-efficient solution for AIoT voice assistants. Regrettably, PVT variation in the analog circuits and data misalignment from sensors limit VAD accuracy under conventional backpropagation model-based training (BPMBT). This brief presents a forward-forward black-box trainer (FFBBT) for analog VADs. It trains the analog circuit without knowing the circuit model or computing its gradient, and is therefore insensitive to PVT variation and offset, achieving a measured VAD accuracy improvement of approximately 3% and a 5.6x reduction in accuracy variation. Moreover, a Tensor-Compressed Derivative-Free Optimizer (TCDFO) is also proposed to reduce the memory required by FFBBT by 1600x. The FFBBT with TCDFO is synthesized in 28 nm CMOS with a power of 512 nW and an area of 0.003 mm².
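The black-box flavor of such training can be sketched with a simple SPSA-style optimizer (a hedged illustration, not the paper's TCDFO): the circuit is only ever queried for a scalar loss, and a descent direction is estimated from two evaluations under a random perturbation.

```python
import numpy as np

def spsa_minimize(loss, theta, steps=300, a=0.1, c=0.1, seed=0):
    """SPSA-style derivative-free training: estimate a descent direction
    from two loss evaluations under a random +/-1 perturbation. Only the
    scalar loss is observed -- no circuit model, no analytic gradient."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # two-sided finite difference along the random direction
        g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
        theta = theta - a * g_hat
    return theta
```

Because only loss evaluations are needed, the same loop works whether the "model" is a simulator or measurements from the physical chip.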