An alternative method based on artificial neural networks (ANNs) for estimating the kinetic growth parameters of a microorganism applies an automatic regression to different sections of the growth curve in order to obtain more precise growth-rate and lag-time values. By combining genetic algorithms with pruning methods, simpler neural networks are obtained, where the fitness measure combines an error function with a second function associated with the network's complexity. An interesting application of this method is the estimation of kinetic parameters of microbial growth (growth rate and lag time); in our case, specifically, the analysis of the effect of NaCl concentration, pH level and storage temperature on the growth curves of Lactobacillus plantarum. In this study it was expected that the architecture of the resulting model would be very simple while still retaining adequate generalization capacity. A comparison of the average standard error of prediction (SEP) obtained for the estimation of growth rate and lag time by the automatic regression (20% and 24%, respectively) and by the Gompertz estimation (22% and 28%) demonstrated the utility of this method.
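The abstract does not spell out the regression procedure, but the core idea of regressing on different sections of the growth curve can be sketched: fit a line to every sliding window of log-count data, take the steepest fit as the maximum specific growth rate, and read the lag time off that line's intercept with the baseline. The Zwietering-style Gompertz curve used for the synthetic data, the window size, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import math

def gompertz(t, A=2.0, mu=0.5, lam=3.0):
    """Zwietering-reparameterized Gompertz curve, y = ln(N/N0),
    built so that the maximum slope is mu and the lag time is lam."""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1.0))

def fit_line(ts, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    slope = sxy / sxx
    return slope, my - slope * mt

def estimate_mu_lag(ts, ys, window=5):
    """Regress on every section (window) of the curve; the steepest
    local fit estimates the growth rate, and its intersection with
    the baseline y = 0 estimates the lag time."""
    slope, intercept = max(
        (fit_line(ts[i:i + window], ys[i:i + window])
         for i in range(len(ts) - window + 1)),
        key=lambda fit: fit[0],
    )
    return slope, -intercept / slope

# Synthetic growth curve with mu = 0.5 h^-1 and lag = 3 h.
ts = [0.2 * i for i in range(101)]
ys = [gompertz(t) for t in ts]
mu_hat, lag_hat = estimate_mu_lag(ts, ys)
```

On this noise-free curve the windowed regression recovers the growth rate and lag time to within a few percent; with real plate-count data, the window size trades noise robustness against bias near the inflection point.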
Federated learning (FL) has gained significant attention in academia and industry due to its privacy-preserving nature. FL is a decentralized approach that allows clients to collaborate on model training by exchanging updates with a parameter server over the internet. This approach utilizes localized data and protects clients' privacy, but it can incur high communication overhead when transmitting a high-dimensional model. This study introduces FedNISP, a federated convolutional neural network pruning method based on the Neuron Importance Score Propagation (NISP) pruning strategy, to reduce communication costs. In FedNISP, the importance scores of the output-layer neurons are back-propagated layer-wise to the other neurons in the network. The central server broadcasts the pruned weights to all selected clients. Each participating client reconstructs the full model weights using the binary mask and trains the model locally on its private data. The significant neurons of the locally updated model are then selected using the mask and shared with the server. The server receives the model updates from the participating clients and reconstructs and aggregates the weights. Experiments conducted on the MNIST and CIFAR-10 datasets with MNIST2NN and VGGNet models show that FedNISP outperforms magnitude and random pruning strategies, achieving a significant compression ratio with minimal accuracy loss.
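The prune / reconstruct / re-prune exchange described above can be sketched for one round on a toy 1-D weight vector. This is not FedNISP's API: the function names, the fake gradients standing in for local training, and the hand-written importance mask are all illustrative; only the mask-based communication pattern follows the abstract.

```python
def prune(weights, mask):
    """Server side: transmit only the entries marked important."""
    return [w for w, m in zip(weights, mask) if m]

def reconstruct(pruned, mask):
    """Client side: re-expand to a full weight vector, zeros elsewhere."""
    it = iter(pruned)
    return [next(it) if m else 0.0 for m in mask]

def local_update(weights, grads, lr=0.1):
    """Stand-in for local training: one gradient step on private data."""
    return [w - lr * g for w, g in zip(weights, grads)]

def aggregate(client_payloads, mask):
    """Server side: average the masked uplinks, then re-expand."""
    avg = [sum(col) / len(col) for col in zip(*client_payloads)]
    return reconstruct(avg, mask)

# Global model and a binary importance mask (here hand-written; in
# FedNISP it would come from the propagated NISP scores).
global_w = [0.9, -0.2, 0.4, 0.05, -0.7, 0.3]
mask     = [1,    0,   1,   0,     1,   0]

downlink = prune(global_w, mask)            # 3 values sent instead of 6
uplinks = []
for grads in ([0.1] * 6, [-0.1] * 6):       # two clients, fake gradients
    full = reconstruct(downlink, mask)
    trained = local_update(full, grads)
    uplinks.append(prune(trained, mask))    # uplink: masked entries only
new_global = aggregate(uplinks, mask)
```

Both directions of communication carry only the masked entries, which is where the compression ratio reported in the experiments comes from; the mask itself is broadcast once and reused.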
ISBN (digital): 9781728110851
ISBN (print): 9781728110851
Recently, CNN-based methods have made remarkable progress in broad fields. Both network pruning algorithms and hardware accelerators have been introduced to accelerate CNNs. However, existing pruning algorithms have not fully studied the pattern pruning method, and the current index storage scheme for sparse CNNs is not efficient. Furthermore, the performance of existing accelerators suffers from no-load PEs on sparse networks. This work proposes a software-hardware co-design to address these problems. The software includes an ADMM-based method that compresses the patterns of convolution kernels with acceptable accuracy loss, and a Huffman encoding method that reduces index storage overhead. The hardware is a fusion-enabled systolic architecture, which reduces the PEs' no-load rate and improves performance by supporting channel fusion. On CIFAR-10, this work achieves a 5.63x index storage reduction with 2-7 patterns across different layers at a 0.87% top-1 accuracy loss. Compared with the state-of-the-art accelerator, this work achieves 1.54x-1.79x performance and a 25%-34% reduction in no-load rate with reasonable area and power overheads.
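The index-storage advantage of pattern pruning can be illustrated without the ADMM optimization or the Huffman stage: once every 3x3 kernel is restricted to one of a few shared sparsity patterns, a kernel's non-zero positions are described by a short pattern ID instead of a per-weight bitmap. The pattern dictionary, the greedy pattern choice, and the bit accounting below are assumptions for illustration, not the paper's method.

```python
# Each pattern keeps 4 of the 9 positions of a row-major 3x3 kernel.
PATTERNS = [
    (0, 1, 3, 4), (1, 2, 4, 5), (3, 4, 6, 7), (4, 5, 7, 8),
]

def best_pattern(kernel):
    """Greedily pick the pattern that retains the most weight magnitude
    (a stand-in for the paper's ADMM-based pattern assignment)."""
    return max(PATTERNS, key=lambda p: sum(abs(kernel[i]) for i in p))

def apply_pattern(kernel, pattern):
    """Zero every weight outside the chosen pattern."""
    return [w if i in pattern else 0.0 for i, w in enumerate(kernel)]

kernel = [0.8, -0.6, 0.1, 0.5, 0.9, 0.0, 0.2, -0.1, 0.3]
p = best_pattern(kernel)
pruned = apply_pattern(kernel, p)

# Index storage: a generic sparse bitmap needs 9 bits per kernel, while
# a pattern ID into a 4-entry dictionary needs only 2 bits per kernel.
bitmap_bits_per_kernel = 9
pattern_bits_per_kernel = (len(PATTERNS) - 1).bit_length()
```

Because many kernels share each pattern, the pattern-ID stream is highly repetitive, which is what makes the subsequent Huffman encoding of the indices effective.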