ISBN (Print): 9781510666184; 9781510666191
Deep learning models for computer vision in remote sensing, such as Convolutional Neural Networks (CNNs), have benefited from acceleration through the use of multiple CPUs and GPUs. There are several ways to make the training stage more effective at utilizing multiple cores at the same time: processing different image mini-batches with a replicated model, called Distributed Data Parallelization (DDP), and computing parameters with lower-precision floating-point numbers, called Automatic Mixed Precision (AMP). We investigate the impact of the DDP and AMP training modes on overall CPU and GPU utilization and memory consumption, as well as on the accuracy of a CNN model. The study is performed on the EuroSAT dataset, a Sentinel-2-based benchmark satellite image dataset for land-cover image classification. We compare training using 1 CPU, using DDP, and using both DDP and AMP over 100 epochs with the ResNet-18 architecture. The hardware used is an Intel Xeon Silver 4116 CPU with 24 cores and an NVIDIA V100 GPU. We find that although parallelization across CPUs (DDP) takes less time to train on the images, it can consume 50 MB more memory than using only a single CPU. The combination of DDP and AMP can free up to 160 MB of memory and reduce computation time by 20 seconds. Test accuracy is slightly higher for both DDP and DDP-AMP, at 90.61% and 90.77% respectively, than without DDP and AMP, at 89.84%. Hence, training with Distributed Data Parallelization (DDP) and Automatic Mixed Precision (AMP) offers benefits in terms of lower GPU memory consumption, faster training execution time, faster convergence towards solutions, and higher accuracy.
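The abstract does not include the authors' code, but the training setup it describes (ResNet-18 on EuroSAT with DDP and AMP) can be sketched in PyTorch roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the dataset root, batch size, learning rate, image preprocessing, and launch configuration are all assumptions.

```python
# Illustrative sketch (not the authors' code): ResNet-18 on EuroSAT trained with
# DistributedDataParallel (DDP) and Automatic Mixed Precision (AMP) in PyTorch.
# Dataset path, preprocessing, and hyperparameters are assumed, not taken from the paper.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision import datasets, models, transforms


def main():
    # One process per device; a launcher such as torchrun sets RANK/WORLD_SIZE/LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    device = torch.device(f"cuda:{local_rank}")

    # EuroSAT RGB patches, resized to the standard ResNet input size (assumed preprocessing).
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    dataset = datasets.EuroSAT(root="data", download=True, transform=transform)
    sampler = DistributedSampler(dataset)  # each replica sees a distinct mini-batch shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    # Duplicate the model on every process and wrap it with DDP (gradients sync in backward()).
    model = models.resnet18(num_classes=10).to(device)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()  # rescales gradients so FP16 training stays stable

    for epoch in range(100):
        sampler.set_epoch(epoch)  # reshuffle the shards every epoch
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            # AMP: run the forward pass and loss in mixed FP16/FP32 precision.
            with torch.cuda.amp.autocast():
                outputs = model(images)
                loss = criterion(outputs, labels)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=<num_devices> train.py` (an assumed entry point), each process trains the same replicated model on a different shard of the mini-batches, while AMP lowers the precision of most tensor computations, which is consistent with the memory and runtime reductions reported above.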